Nathan/ Your work is about so much more than impact, but your experience and journey have clarified and informed impact for me so much that I’d like to ask you: what counts as impact?
Minnie/ There are so many ways to define and measure impact, from the social context to many broader contexts, such as the ten versions of capital: ecological impact, social impact, financial impact, etc. What I tend to focus on in my practice is the individual and the organization and the downstream implications of the decisions we make as product designers every day. I start at the human level and then scale from there. It’s important to get super granular: what are the guardrails we need to set up to be conscious of how an individual is going to be affected? In my work, we ask questions like, “What are the ramifications for mental health when people are using technology all the time?” We know that there’s a negative impact on young teen girls, for example, when they’re using social media, but why? What’s the cause? If we can understand the causation, we can better set ourselves up with the education and guardrails to prevent that.
So, I usually focus on the human scale, including community impacts, and then scale to business implications from there. We can look at these other layers, too, like environmental impact or societal impact, but I wouldn’t call myself an expert on the things that sit outside of the human condition.
Nathan/ How do you start that process inside of companies?
Minnie/ I always encourage people to talk about their theory of change. Historically, every socially minded organization has had a theory of change, and I think that should be true for profit-minded organizations, as well. There should be an intended challenge to be solved and outcomes associated with that intervention. A theory of change lays out the series of inputs and interventions, the activities needed to support those, the opportunities, the outcomes, and the measurement. By being really concrete about what, exactly, the indicators of success are going to be, we can see when change happens and what we can do to revise the intervention when it doesn’t.
It can be challenging both in-house and in the tech community—especially with smaller-scale projects. If we don’t know what, exactly, we’re measuring and the outcomes we expect, we set ourselves up to lose sight of the impacts—especially the social ones. I also want to mention the Ethical Design Toolkit. It focuses on three core questions around measurement: one is the safety of our users, two is transparency for them, and three is their autonomy. So, I always ask these questions on every project.
In addition, I find dichotomy mapping really interesting: anticipating what could unintentionally go wrong. We spend a lot of time considering how we’re going to benefit our users, but what harm might we create with this product or intervention? What are the downstream impacts?
Look at Uber and Lyft, for example. People can move around much more quickly and easily. There’s new infrastructure for transportation now, and we’re democratizing it, but what has it disrupted? Wages, traffic, equity, safety, etc. It’s important that we think about all of these different tools as ways to really be conscientious of the impact on individuals and communities.
Nathan/ This is great because theory of change is essentially strategy, right? It’s no different than how companies do strategy, which is they have a theory of change in the market or in their sales, and that theory needs to be tested. When you work with companies, how do you go about testing their theory of change? As you know, mission statements say all sorts of wonderful things but that doesn’t necessarily make it so. How do you validate that the theory of change is possible or correct and where do you put it in the process?
Minnie/ I like hypothesis-driven development because it’s a great way to put a theory of change or a mission into practice. It’s not dissimilar to the scientific method, and I love the idea that the scientific method can create some degree of rigor in our industry. This approach has four stages. The first is the hypothesis: what problem do we think we’re solving? We have to be really clear about what it is and not change that hypothesis throughout the course of the experiment.
The second stage is the evidence. You need to know how you’ll tell whether this hypothesis is or isn’t working. If we think we’re going to make our customers’ lives easier by creating something, okay, how will we know that their lives are getting better? What evidence do we have about their current lives and what will we look to after our intervention?
The third is data and its collection. What kind of data do we need to measure? Is it quantitative, qualitative, both? How are we collecting it? Pre- and post-surveys, work-product analysis, interviews, something else? There are so many ways we can collect data in this day and age, but not all data is equal—nor are all collection methods.
Fourth, what is the action? How are we planning to use that data as evidence of our hypothesis working (or not)? I think this is a concrete way to look at something like a mission statement or a theory of change and to create feedback loops that validate action. You may need three or four hypotheses, but you have to have that degree of rigor for each and every one of them.
Nathan/ How often is that rigor engaged in your experience? For example, did Facebook or Instagram do that testing at any point? Should they have? Then again, they likely didn’t have any theory of change other than making money.
Minnie/ This is one of the challenges in tech. To quote Spider-Man, “With great power comes great responsibility.” First, you have to care about things like teen mental health. If so, you can create hypotheses and then look for data. For example, if we put out face filters for photographs, our hypothesis might be that they’re going to be playful and create new opportunities for people to have fun in their communities. But we then need to collect the data.
But let’s say that the data we collect shows that they’re actually creating dysmorphia around appearance and reducing self-confidence. Whether it’s a big company or a small startup or anywhere in between, you have to do something with that. It’s a liability if you sit on that data. That’s why it’s so important that there’s an action plan associated with your hypothesis. If we set up these experiments and know what the outcomes are, what will we do with the information once we have it? Facebook made a giant mistake a couple of years ago and got sued because they found all these outcomes and did nothing with that information.
The design thinking process has, historically, been a little cavalier, right? We brainstorm and come up with these ideas and then we test them—hopefully. So, when you come out of the synthesis phase and have three to five opportunities, that’s when you should apply the hypothesis process. Like, here’s the opportunity. It’s not yet a real opportunity. We did this brainstorming and research to identify potential opportunities, but we haven’t validated that they will have the outcomes we want. If we don’t validate them in some kind of process, they’re just cool ideas. It’s particularly important for systemic challenges, things that will launch to many, many people and will have long-term, downstream effects.
Nathan/ Do you see that, ultimately, it circles back into customer research?
Minnie/ Yes, definitely. Customer research is one way of validating or invalidating a hypothesis. It’s really important for people to combine qualitative and quantitative data and use a mixed methodology. Otherwise, it’s too easy to find only the explicit outcomes and miss the implicit ones, or to confirm our assumptions with just a little data but miss the real story.
Nathan/ How do you deal with pushback from either peers or people above you in an organization? Or clients who say, “Eh, we don’t really need to do this” or “I don’t believe that the impact is really like that” or “You’re exaggerating the outcomes,” or however it might show up?
Minnie/ One interesting research study showed that Gen Z customers put more emphasis on transparency and honesty from organizations. I will share with stakeholders that we are not living in an era where you can hide anymore. We’re living in a social, information age. So, if we decide to forego user safety or the emotional wellbeing or mental health of the people that we’re serving, that’s not going to stand for long. Cancel culture is real. A few false moves and people will boycott your product or service. We saw this when Uber turned on surge pricing when there were suddenly immigration issues at JFK airport, and they charged four times the normal amount to lawyers trying to get out there and help the immigrant community. I think they lost 25% of their users in the New York area that day. That decision had a very concrete impact on the bottom line, whether sooner or later.
At Pinterest, we’ve seen Gen Z users increase more than any other demographic, and they describe it as a safe space on the Internet where they don’t feel bombarded by weight-loss ads and all of the things that contribute to poor mental health. I really do think that younger generations are going to hold organizations accountable for things they never anticipated before.
Nathan/ You mentioned measuring impact many times. There are a ton of frameworks for measurement: ESG (Environmental, Social, Governance), CSR (Corporate Social Responsibility), now DEIB (Diversity, Equity, Inclusion, Belonging), and older ones like SROI (Social Return on Investment) and SIA (Social Impact Analysis). How would you counsel someone who says, “Fine, show me the numbers!” How do you measure all of these impacts?
Minnie/ I’m not the expert in this area, but I try to point people to case studies and examples of where we’ve seen businesses thrive because they put their users first and where companies have failed by misstepping. I teach a class around designing for inclusion and there are so many examples where representation matters. When we create products and services that are inclusive of a broader range of society, that increases the number of people who engage. When you don’t test face filters or other features with people with darker skin, they obviously won’t work for them, so you’ve reduced your engagement numbers. Aside from being a socially blind move, it’s a dumb business move, as well. If they don’t see the social value, show them the loss in business value that decision caused.
Nathan/ Social impacts are the hardest to measure, partly because not everything is measurable in numbers. How do people build a scorecard for determining whether they’re meeting their objectives and making things better or not?
Minnie/ It’s so different for every organization. ESG is the standard, but it’s under attack at the moment. Most of the specific impacts lie within those three categories, so it’s the most widely applicable. All of the UN’s 17 Sustainable Development Goals fit within those three categories, too, so they feel pretty inclusive. However, that doesn’t help you define the actual impacts you want to focus on nor how to measure them. That’s work you need to do yourself.
Nathan/ One of the most valuable things that you provide for our students is an understanding of trauma-informed research or “careful research.” As a design researcher, you’ve had lots of experiences that have informed this. Can you speak about that journey?
Minnie/ Let me see if I can do this succinctly. Historically, in the time that I was learning the process of human-centered design, designers had a bit of an inflated ego about their process. Many were discussing being a design-led organization, and the term innovation was everywhere. We thought of design as a sort of silver-bullet process that was going to help businesses and companies innovate.
That tipped a lot of influence to the design community, sometimes at the expense of the people we were designing for. Liz Jackson talks about “pathological altruism,” which is the blindness well-meaning people have to the impacts of their actions, even when their intentions are genuinely good. Basically, designers would go into a situation believing, “We’re doing human-centered design. We’re actually talking to the people who use the products.” But, sometimes, they were just stealing the ideas and things that already worked for those people, selling them back to these giant companies, and calling it “design thinking.” Designers were inadvertently repackaging local knowledge. Then, there’s design voyeurism, where the goal is to get into people’s homes and learn about their processes and how their minds work. And yet we’re putting them in a really vulnerable place because we’re asking very personal questions and we’re not giving them a lot of context as to how the information they share with us is being used.
I ran into it myself. I traveled a lot to emerging markets. As a white woman, I would show up and explain, “I’m going to teach you this process.” And, correspondingly, I centered myself as a designer rather than the people I was trying to serve.
Now, I try to create as much transparency as I can, and I think of the design research process as more of a dialogue. When you’re working with vulnerable communities, it’s especially important to be transparent, find common understandings, and treat them as partners in the process.
There is co-design and participatory design, but this goes a step beyond those, to capacity building. I try to connect what people share to the results, like an idea on a Post-It™ that became part of a prototype, so I can show people their impact and the value of what they shared. But, more importantly, I want them to see their part in the process and learn that process as a skill to employ if they want to. As a group of practitioners, we need to be conscious of how we enable the people we’re designing for and with. They should feel like they’re at the center of the ideas and that we’re a set of hands to help them.
This interview is from A Whole New Strategy: Everything You Need To Think And Act More Strategically