The Design of Lorax: Helping Communities Tackle Climate Change
Two important things to know about the climate crisis:
- If an individual reduces their own carbon footprint to 0, that’s a total impact of 0.0000000003% on global carbon emissions (The Nature Conservancy & Our World in Data, 2021), which is negligible.
- Individuals still have the power to create chain reactions that impact the ways their friends, communities and governments react to the climate crisis, which is crucial to solving the climate crisis.
Knowing this, we wanted to build climate-aware communities. Lorax aims to facilitate these chain reactions by helping individuals see a clear picture of their impact, understand their contributions through local community initiatives and consumer actions, and incentivize individuals to spread their actions across the community.
We aim to connect individuals with their communities so they can share their green journeys with neighbours and peers, encouraging others to improve their own lifestyles. To accomplish that, we want to provide a polished, understandable, and simple user experience.
So who does Lorax target? The following personas represent the spectrum of users along the environmental-awareness scale.
Jessie Perez: Ultimate Environmental Activist
24-year-old climate activist, Female, Single, Brooklyn NYC, Degree in reporting and environment
- Actively lives sustainably (commuting, vegetarian, zero-waste)
- Runs meetings and organizes protests to bring about climate awareness
- Runs a podcast, YouTube, and Ted Talk series where she reports on climate change solutions
Joanna Simpson: Learning about sustainability, asking how to help
32-year-old marketing specialist, Female, In A Relationship, San Francisco CA
- Expanded knowledge through podcasts and books recommended by climate activists
- Trying to live sustainably (plant-based diet, reduced personal waste, shopping locally)
- Would like to involve people in her circles to live more sustainably
Bill Irving: Interested, but not sure where to start
36-year-old automotive salesman and father, Male, Single Parent, Calgary AB
- Limited knowledge about climate change, but scared about what will be left for his daughter
- Motivated by his daughter to learn more about what he can do to live sustainably
- Unsure where to start or what to change first
Sam Choi: Nonchalant, minimally tries to be “green”
21-year-old student, Female, Single, Waterloo ON
- Rarely thinks about the environment because she prioritizes more immediate concerns
- Always picks the most convenient options (driving > transit, one-time use > reusable)
Delon Wright: Indifferent about environmental issues
26-year-old IT Technician, Male, Single, Sacramento CA
- Unaware of climate change
After defining the user personas, we created empathy mappings which allowed us to better hypothesize their common pain points and goals.
Once we completed these personas, we were ready to test assumptions and draw out surprising insights through user interviews.
We interviewed six users during the first few weeks of the project. The most important demographic factor we considered was how environmentally aware each user was and the extent of their involvement in environmental justice. Some interviewees were committed to solving these environmental issues, while others rejected climate change almost entirely. Our interviewees spanned ages 20–50, which allowed us to understand the values and needs of a variety of age groups.
In order to grasp a detailed understanding of the user’s needs, we used various methods to perform and analyze the interviews. These include preparing high quality interview questions, creating affinity diagrams, and analyzing work models.
Coming Up with Interview Questions
Each of our interviews took 30 minutes and we created a plan to ask questions related to 3 general themes:
- The user’s community
- The user’s sources of news and information
- The user’s daily lifestyles
Throughout the question brainstorming process, we applied course knowledge to keep our questions neither too broad nor too narrow, while minimizing leading questions; we wanted to elicit open-ended discussion. However, feedback from a buddy team and from our TA, Jean, revealed that our initial batch of questions still contained these flaws.
For example, a question we devised regarding the user’s community was:
“What do you talk to your friends or family about?”,
which can be leading and narrowly scoped. Instead, the following rephrasing of the question would potentially help the user to think about all the different interactions they have with their neighbours:
“Could you please describe your relationship/interactions with your neighbors?”.
This feedback encouraged us to iterate on and improve our set of interview questions.
This notion of iteration and improvement did not stop there. As we performed user interviews, we reflected on our feedback to make adjustments for the following interviews. In one instance, we noticed that when we asked a user an early question about environmental news they had followed, the rest of their responses were coloured by a desire to do something about the climate. After this interview, we reordered the questions in hopes of getting less environmentally conscious responses and an unbiased understanding of our users’ thoughts and intentions.
After we conducted enough interviews to represent a majority of the personas, we organized our notes into an affinity diagram.
Up to this point, our interviewees ranged from persona 3 to 5 (i.e., from climate-neutral to climate-skeptic). Affinity diagramming helped us organize each individual’s points into hierarchies and discover common issues experienced by all users. Notable themes we discovered are as follows:
- Individuals are aware of human impacts on the planet — but only in terms of directly observable actions like littering and polluting. There is much more to the climate crisis than the visible issues our interviewees knew about, and this interesting insight pushed our product to provide more transparency and knowledge about the crisis.
- There was a lack of community within our interviewees’ local neighbourhoods. This was a crucial, non-obvious discovery because it highlighted early on that a community-driven approach to sustainability may encounter additional hurdles if neighbours do not interact with each other in the first place.
- Interviewees felt helpless in their ability to make a positive impact on the environment and believed that corporations hold the key to solving the climate crisis.
Affinity diagramming let us systematically group common points together, making it easy to shortlist features that directly targeted the sustainability challenges at hand.
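To make the grouping mechanics concrete, here is a minimal sketch of affinity diagramming expressed as a data operation. The note text and theme labels below are invented for illustration, not our actual interview data:

```python
from collections import defaultdict

# Hypothetical interview notes, each tagged with a theme during diagramming.
# (Illustrative only; not the actual interview data.)
notes = [
    ("littering bothers me", "visible-impacts-only"),
    ("I don't know my neighbours", "weak-local-community"),
    ("only corporations can fix this", "feelings-of-helplessness"),
    ("I recycle because the bins overflow", "visible-impacts-only"),
]

# Group each individual's points under shared themes.
affinity = defaultdict(list)
for note, theme in notes:
    affinity[theme].append(note)

# Themes mentioned by more interviewees surface as larger clusters.
for theme, grouped in sorted(affinity.items(), key=lambda kv: -len(kv[1])):
    print(f"{theme}: {len(grouped)} note(s)")
```

On a wall of sticky notes the grouping is physical rather than programmatic, but the outcome is the same: the largest clusters point at the most widely shared pain points.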
After completing our final user interviews, we also gained enough knowledge to build work models in order to better understand a user’s day-to-day actions.
We focused on creating two different work models, a Sequence Model and a Cultural Model.
Our sequence model focused on the customer journey of purchasing everyday items. Here we focused on common themes and breakdowns (problems).
- Lack of convenient options in one’s neighbourhood
- Lack of information about which products are sustainable and where to locate these products
- The importance of product recommendations and acknowledgements from friends
- The user is much less inclined to travel farther away to purchase the same class of goods
- The user could not find the recommended product (which was a sustainable option) and instead bought a product with wasteful single-use packaging
- The user was unsure about the packaging and threw it in the garbage, assuming it was single-use only
These themes and breakdowns encouraged us to incorporate more social aspects, letting friends review and share products with each other, and to provide more information on sustainable retailers.
Our cultural model helped us to better understand how a persona 1 or 2 (very environmentally concerned) user perceived and interacted with their environment.
- Conflict of self-interest vs. what’s best for the environment
- Lack of discussion on climate topics amongst friends and family, news outlets, and in politics
- Expensive sustainable options with “green premiums” resulting in a high barrier to entry to live sustainably
- Political polarization regarding the validity of climate change
- Polluting corporations constantly placing the blame on individual choice
The themes and breakdowns from the cultural model motivated us to improve the initial design of involvement opportunities that promote sustainability discussions, to generate helpful purchase alternatives that counter self-interest, and to provide transparent information to help individuals make informed decisions within the app.
Overall, through our interviews and analysis process, we decided to focus on the following challenges to work towards our vision of building climate-aware communities:
- Equip individuals with enough information and motivation to make informed sustainable decisions
- Provide a platform for users to participate in local communal opportunities that tackle climate change
- Help people become more connected and communicative with their community
By this point, we had completed a substantial amount of user research, so it was finally time to begin creating the features for our app.
We initially synthesized our research into 5 main features: Donations, Initiatives, Shopping, Gamification, and Social Feed. We specifically chose these features to allow individuals to live sustainably, to see their impact, and to be connected to their community.
To further expand on our features, we conducted various brainstorming and design activities: working through design arguments, user stories, Crazy 8s, and storyboarding. Each activity laid the foundation for what our app would include. Then, applying these activity results, we drafted initial sketches for each feature.
Starting with design arguments, we had to figure out the purpose and impact of each feature. We determined the main problem that each feature tackled, along with the solution our app would provide.
With the design arguments finished, we wrote user stories to describe what we expected users to want from each feature. We took inspiration from our design arguments and user interviews to formulate realistic and relevant user stories.
Our Crazy 8 brainstorming session allowed us to quickly jot down 8 different types of layouts for each feature after discussing what we wanted on the app and what behaviour the features should have.
The Crazy 8 designs were very rough and highly experimental. We drew inspiration from our own experiences and from other apps we had seen. Then we discussed each view and voted for our favourites, finalizing the design we wanted for our initial draft.
Likewise, we took into account the overall purpose and goals of the users and created storyboards for each feature. The finalized Crazy 8 design and user story were used to derive the context of each scene. We looked at the circumstances in which the features would be used and drew these scenarios.
Initial Sketches + User Flows
At last, we made our first sketches, linking them together into a user flow for each feature, with the rough views from the Crazy 8s, the storyboards, and our original problem statement in mind.
By combining the data we gathered from our interviews with our prior HCI knowledge, we were able to set a solid user-driven foundation for Lorax.
Paper Prototypes and Evaluation
Based on our initial designs, we created paper prototypes: sketches of individual screens for each feature, along with their corresponding input elements and state changes. The goal of paper prototyping is to iterate on a design quickly and efficiently based on user feedback. Each feature has its own goals, tasks, and results.
We’ll go through one feature in-depth, as an example.
Shopping Feature Case Study
Goals and Hypotheses
The goal of this feature was to help users find and compare information about product alternatives that are better for the environment, via a green score. In building the initial prototype, we hypothesized that users would want to look up items by category or by search. We also assumed that users would like carousel photo scrolling and that logging an item purchase was easy to accomplish. By testing these hypotheses with targeted tasks, we aimed to challenge, and potentially disprove, our own assumptions.
The specific tasks we initially decided to go with are as follows:
- Search for a category of product alternatives (e.g. cars, household items) and select an appropriate alternative
- Find more information about a product alternative
- Record a purchased product in the app
We needed these tasks to:
- Ensure the user knows how to navigate the product category page and search features
- Ensure the user knows how to interact with the various help buttons and detail panels on the item page
From our paper prototype interviews, we obtained the following feedback:
- Users did not like the category view, and would rather have the product list view as the home page
- Individuals prefer searching to scrolling when looking for a specific category
- Users wanted better clarity on messaging and on interactive components like buttons and question-mark indicators
- Users found the purpose, icons, and measurements of green scores ambiguous
- Opinions on viewing photos differed: some people liked the carousel, others preferred thumbnails
The main themes that came out of the Shopping prototype were:
Summarized information should be easy to access, with details just a tap away
- Prompted us to merge the category and product views together onto one page
Allow users to perform actions quickly
- Prompted us to emphasize the ability to search for products or categories
- Suggested that we should make our components more unified and use clear language
People have different preferences for the item’s photo display style, which does not really affect the product’s functionality
- Suggested that we examine the options carefully, but not spend too much time on them and simply make the final call
Our findings surprised us greatly. For example, we did not expect such a harsh response to the category view, and it was non-obvious to us that users preferred to start off by browsing items rather than pinpointing a category or searching for a specific item. We had not considered the browsing use case in our first iteration of the design; after understanding user behaviour and intentions, however, we heavily modified our next iteration and, eventually, our Figma designs based on this feedback.
General Evaluation Results
The following are the overall themes from our feedback:
Designs were unintuitive and difficult to use.
Why? This may have been because we were not making use of common signifiers, and we prioritized nailing down the behaviour rather than empathizing with our users about what seemed intuitive. The prototype evaluations allowed us to shift our priorities and check our biases.
Too much clutter, overwhelming the user
Why? While some areas were left ambiguous, we also tended to overcompensate by displaying as much information as possible. For example, our gamification feature originally had a dozen question-mark buttons, which greatly confused our users even though our intention was to explain confusing keywords.
Contrasting opinions with different interviewees
Why? People sometimes had strong personal preferences, which prompted us to take the time to reflect on different versions of our designs. We would weigh the pros and cons of each opinion and make a final judgement on what to do.
No design is perfect.
We quickly discovered this as we conducted our paper prototype interviews: our designs were less intuitive than we assumed, and our interviewees had conflicting opinions.
Luckily, the paper prototyping process allowed us to quickly iterate on our design as it only took a session to update screens and rejig flows to get ready for the next round. So with our new findings, design iteration is exactly what we did.
Based on the feedback from the paper prototype sessions, we identified several key areas where our designs could improve. At a high level, we wanted our many features to mesh together, creating a unified experience.
For the gamification feature, we believed it would perform better integrated into other features, adding a layer of delight and motivation to the whole app, rather than standing alone. This led us to de-scope the store feature and to evolve the profile design to show more social features such as badges, leveling, and leaderboard ranking.
For the shopping feature, the paper prototype feedback showed us that users wanted a more familiar e-commerce experience. We therefore redesigned the home page, allowing users to view a wider variety of items before selecting a category, which now appears as a sidebar pop-up. Additionally, we added a new Categories filter that lets users search within a specific category.
For community initiatives, testers told us during the paper prototypes that they wanted to see upcoming, saved, and past initiatives as the anchor page for this feature, instead of in the “Your Initiatives” tab. Testers also struggled to figure out how to track past initiatives, so we clarified that process as well. Tying into gamification, the new designs also reflect an integrated points system.
For the social feed, during testing, users felt that there was not enough content on the timeline. So we revised the designs to allow the creation of custom posts for purchasing alternatives, initiatives, badges, and donations. Additionally, this design change would contribute to the app feeling a lot more integrated and unified.
High Fidelity Prototypes and Evaluation
Goals for High Fidelity Evaluation
For the high fidelity prototype, we translated our paper prototype into Figma designs. Having put our paper prototypes through extensive testing, we created two goals for our high fidelity evaluation results.
Our primary goal was to create an aesthetic product and see if the visual design resonated well with our target users.
Our secondary goal was to see whether our paper flows translated well into a high fidelity medium.
Heuristic Evaluation: Heuristics
The heuristic evaluation allowed us to have experts evaluate our app and provide feedback. To do this, we chose five heuristic categories and created a set of tasks for the evaluators to go through.
Flexibility and ease of use
- Ensure that features make sense for beginners and new users, providing a positive experience for them
Visibility of system status
- Allow us to understand whether the app gives the user sufficient and appropriate signifiers and feedback
Aesthetic and minimalist design
- Allow us to understand whether the features provide the right amount of information and guidance on each screen to users
- Ensure that relevant information is highly visible
Consistency and standards
- Ensure standards such as date and time conventions (MM/DD/YY vs DD/MM/YY) are unambiguous
- Ensure labels, features, and icons are clear and consistent
Match between system and the real world
- Determine whether the language we use is clear and concise for our users
- Ensure that our users understand the info and directions in our app
Heuristic Evaluation: Tasks
- Make a donation to “Tree planting in Vancouver” and check your Social feed for others’ reactions to your donation.
- Look at the sustainability details for a Tesla Model S. Assume you have purchased one and have a receipt, then record your purchase in the app.
- Look for initiatives happening in Waterloo and find the “Tree Planting — Waterloo Initiative” and register for July 17. Then assume you have participated in the event, and record the total number of trees planted.
Heuristic Evaluation: Results
Through the heuristic evaluations, we learned that we needed to focus on the heuristics of Consistency and Flexibility & Ease of Use. This translated to a handful of improvements that we added to our app design.
Consistency and Standards
Users wanted more transparency in the Donations feature, especially around how their money would be used within the organization. Users also noticed small inconsistencies within the app, from word choice to feature names to the look of buttons and other interactive components.
Flexibility & Ease of Use
Users were intrigued to see the activities that their friends were involved in and wanted a way to quickly check out the initiative, donation, or product their friend was sharing.
Cognitive Walkthrough: Tasks, Results
For the cognitive walkthrough, we gave participants 3 main tasks covering the usage of the majority of features.
Make a donation to “Tree planting in Vancouver” and check your Social feed for others’ reactions to your donation.
- Almost all participants intuitively understood how to navigate the app and go through the donations payment flow.
- The only failure stories concerned the credit card details: the prototype design made it seem as if the credit card fields were not editable, leaving users frustrated at this final, crucial step.
Look at the sustainability details for a Tesla Model S. Assume you have purchased one and have a receipt, then record your purchase in the app.
- All participants successfully recorded their green purchase.
- The only failure story was that participants initially used the search bar to find the Model S; however, they were still able to complete the task after exploring the app.
Look for initiatives happening in Waterloo and find the “Tree Planting — Waterloo Initiative” and register for July 17. Then assume you have participated in the event, and record the total number of trees planted.
- Testers had some trouble with this task. One tester’s mental model was that initiatives should be logged on the profile page, so they missed the past initiatives tab.
- Once guided to the tab, some testers also could not see the past initiatives feature because it was not prominent enough.
- However, after minimal guidance and pointing out specific features their eyes missed, all participants were able to successfully complete the task.
High Fidelity Design Changes and Iterations
Our primary goal was to achieve a visual design that resonated with our target users, guided by the heuristic evaluation and cognitive walkthrough feedback. We polished the pixels and made the copy (titles, feature names, initiative names) consistent throughout the app. Through the following iteration changes, we accomplished our goal.
The social feed was easy to browse through, but users wanted a quick way to see details about activities that their friends participated in. Design-wise, users had difficulty reading the comments on the feed due to the small font size.
Changes on next iteration:
- Allow users to “learn more” about a specific donation or initiative that a friend did and deep-link them directly to the donation/initiative page
- Standardized the font sizes on the entire app (not only the social feed), making sure that every component was readable and large enough to be clear
In terms of the fluidity of the user experience, we were shocked to find that we had wrongly assumed users would be interested in logging their contributions in Initiatives as a way to feel good about their progress. This discovery explained why they didn’t find logging intuitive: users did not want to manually add every detail of an event, which is tedious and easily forgotten.
Changes on next iteration:
- We removed unnecessary whitespace and extra pages that the user had to click through, simplifying the sequence the user goes through
- Dates now automatically populate the logging form, tying together the registered event and the past-event logging and letting users focus on impact and results rather than menial details
- User logging difficulties motivated us to make the user flow more concise and enticing; we then interviewed more people after these changes to ensure they were a step in the right direction
For the Donations feature, people were interested in seeing more information about the cause they were donating to. Based on the Consistency and Standards feedback from the heuristic evaluation, our users wanted more transparency in this feature. Other helpful updates were clarifying the payment screens and the input of values.
Changes on next iteration:
- Included a “Learn more” button for users who are curious about exactly how their money will be used; it shows a pie chart of where portions of the donation go (activism, marketing, tools, etc.)
- Added more indicators for filled-out credit card fields, showing users what has been entered
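The data behind such a pie chart is simply a set of allocation fractions applied to the donation amount. A minimal sketch, with invented category names and percentages (an organization’s actual breakdown would come from its own reporting):

```python
# Hypothetical allocation fractions for a "Learn more" pie chart.
# Category names and percentages are invented for illustration.
allocation = {"activism": 0.55, "tools": 0.20, "marketing": 0.15, "operations": 0.10}

def donation_breakdown(amount, allocation):
    """Split a donation amount across categories; fractions must sum to 1."""
    assert abs(sum(allocation.values()) - 1.0) < 1e-9
    return {category: round(amount * fraction, 2)
            for category, fraction in allocation.items()}

# A $25 donation split across the hypothetical categories.
print(donation_breakdown(25.00, allocation))
```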
Unfortunately, the shopping feature was the most controversial. People were either confused about its purpose or strongly disliked its original “wall-of-text” design. Green score was an undefined term, and many users did not know what it meant. The first design was also almost too simple and did not provide enough value for users.
Changes on next iteration:
- Added an info button describing how the green score is calculated
- Replaced the “wall-of-text” with icons, stats, and attributes of the item to improve user comprehension
- We clarified the logging of the purchase and tied it together with the gamification, unifying this feature with the rest of the app
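To illustrate the kind of explanation the info button could give, here is a purely hypothetical sketch of a green score computed as a weighted sum of normalized product attributes. The attribute names and weights are invented for illustration, not Lorax’s actual formula:

```python
# Invented attribute weights; each attribute is rated from 0.0 to 1.0.
WEIGHTS = {
    "recyclable_packaging": 0.3,
    "low_carbon_shipping": 0.3,
    "sustainable_materials": 0.4,
}

def green_score(attributes):
    """Return a 0-100 score from per-attribute ratings in [0, 1]."""
    raw = sum(WEIGHTS[name] * attributes.get(name, 0.0) for name in WEIGHTS)
    return round(raw * 100)

# Example: strong materials, average packaging, poor shipping.
print(green_score({
    "recyclable_packaging": 0.5,
    "low_carbon_shipping": 0.2,
    "sustainable_materials": 0.9,
}))  # -> 57
```

An info button can then surface the per-attribute breakdown, so the single number stops being an undefined term.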
Interviewees generally found the profile appealing and straightforward, with one big improvement suggested: the leaderboard should showcase the user’s standing more clearly.
Changes on next iteration:
- We ranked the leaderboard and highlighted the user placement on the leaderboard
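Mechanically, this change amounts to sorting by points and flagging the current user so the UI can highlight their row. A minimal sketch, with hypothetical users and point totals:

```python
# Hypothetical leaderboard data; names and point totals are invented.
scores = {"ava": 320, "you": 410, "sam": 150, "lee": 275}

def ranked_leaderboard(scores, current_user):
    """Return (rank, name, points, is_current_user) rows, highest first."""
    ordered = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(rank, name, points, name == current_user)
            for rank, (name, points) in enumerate(ordered, start=1)]

for rank, name, points, is_you in ranked_leaderboard(scores, "you"):
    marker = "  <- your placement" if is_you else ""
    print(f"#{rank} {name}: {points} pts{marker}")
```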
The process of designing Lorax from scratch was exhilarating. Surprisingly, far more research and iteration went into realizing our conceptual ideas than we had expected. Along the way, knowledge gained from user interviews and research humbled us, motivating us to continually address our biases and shape our vision to fulfill the ever-changing requirements.
Lorax addressed the problems that we set out to solve at the beginning of the term by amplifying individual actions with community interactions.
There are features that we would love to continue to explore: a community points system, community vs. community competitions, adding custom community initiatives, and more. However, we weren’t able to complete these due to time constraints.
If we were to redo this project, we would iterate on the app as a unified whole rather than as 5 separate features. Since we originally split the work by feature, it was harder to introduce unifying elements like gamification.
At last, we built a community-driven platform for tackling climate change. Lorax guides and connects communities in taking that first step to making the world more sustainable.
With the ability to share purchase alternatives with friends, track upcoming local events, and drive change together with donations, Lorax impacts the way we think about tackling the climate crisis. Lorax changes the question of “What can I do?” to “What can we do?”.
Now it’s up to us to take action with our communities to solve this global crisis, together.
References
Ritchie, H., & Roser, M. (2020, May 11). Greenhouse gas emissions. Our World in Data. https://ourworldindata.org/greenhouse-gas-emissions
The Nature Conservancy. (n.d.). What is your carbon footprint? https://www.nature.org/en-us/get-involved/how-to-help/carbon-footprint-calculator/