Pieta Blakely, PhD helps mission-based organizations measure their impact so that they can do what they do well. She started her nonprofit career as a teacher in workforce development and adult basic education. It was important work, and she was worried that they didn’t really know if they were doing it well. In the process of trying to answer that question, Pieta got a Master’s in Education and a PhD in Social Policy, and became an evaluator.
Pieta has been an evaluator for over fifteen years, the past five of those as a consultant helping mission-based organizations use evaluation to build better and more effective programs. She believes that evaluation isn’t a test, it’s an ongoing process of trying things, measuring the results, and making adjustments. Her goal is to help build organizational cultures that thrive on joyful accountability and doing important work well.
Pieta is known for explaining complicated things clearly, an emphasis on ethics and justice in evaluation, an understanding of how not-for-profits work, and her unpredictable efforts in vegan and wheat-free baking.
Eli Holder is a dataviz designer, researcher, and founder of 3iap, a data visualization design firm. 3iap (3 is a pattern) specializes in psychologically effective information design, approachable analytics, and developing human-centered data products. If you’re a data designer, journalist, or analyst, Eli’s Equity-Oriented Dataviz Workshop can quickly teach your team how to visualize data on inequality, without reinforcing inequality. This covers not only his recent research, but also the underlying psychology and alternative design approaches to conventional (harmful) visualizations of racial outcome disparities.
- Eli on Twitter
- Pieta on Twitter
- What can go wrong? Exploring racial equity dataviz and deficit thinking, with Pieta Blakely and Eli Holder
- Eli will present his paper (co-authored with Cindy Xiong) in October, at IEEE VIS 2022, but you can find a preview of the findings here: “Dispersion vs Disparity” Research Results: How masking uncertainty encourages stereotyping when visualizing social outcome disparities.
- Presenting data for a Targeted Universalist approach
- Pieta and Eli discuss data viz and equity: https://youtu.be/EcCRUXlgoOc
- Do No Harm Guide Project Page
New Ways to Support the Show!
With more than 200 guests and eight seasons of episodes, the PolicyViz Podcast is one of the longest-running data visualization podcasts around. You can support the show by downloading and listening, following the work of my guests, and sharing the show with your networks. I’m grateful to everyone who listens and supports the show, and now I’m offering new exciting ways for you to support the show financially. You can check out the special paid version of my newsletter, receive text messages with special data visualization tips, or go to the simplified Patreon platform. Whichever you choose, you’ll be sure to get great content to your inbox or phone every week!
Welcome back to the PolicyViz podcast. I am your host, Jon Schwabish. And on this week’s episode of the podcast, I talked to Pieta Blakely and Eli Holder about their work on equity and inclusiveness in data and data visualization. If you’ve been following my work for the last year or two, you know this has been a big topic of interest for me. I’ve written a couple of papers about this particular topic, with more that I’m currently working on through the Urban Institute, including two volumes of what we’re calling the Do No Harm guide. So I hope you’ll check those out, and I hope you’ll check out the blog posts that Pieta and Eli have written that I linked to in the show notes page. So I hope you’ll take a listen to this episode, and I hope you’ll think about ways in which you can be more inclusive and more thoughtful and more strategic in the way that you talk about and visualize different groups across the world. So here is my conversation with Pieta and Eli.
Jon Schwabish: Hey Eli, and Pieta, good morning. Welcome to the show. How are you both?
Eli Holder: Hello.
Pieta Blakely: Great, thank you.
JS: Great to have you both on the show, excited to chat about all the great work you’re doing. I thought we would start by some introductions for folks who don’t know of you or your work, and then we can get into all the nitty-gritty of the content. So maybe Eli, you’d like to start – tell folks a little bit about yourself and your background.
EH: Yeah. I’m Eli Holder. I, a long time ago, studied computer science in school, did my first kind of scientific visualization research in undergrad, got out of school, started a couple of startups; one, again, back to DataViz, looking at how can we take Fitbit data and personal health tracking and make it less clinical, make it more motivating [inaudible 00:01:54] part of the, like, emotional sides of DataViz. And then from there, a few miserable years as a product manager, working on kind of data-oriented products. And then, 2020 happened, I think we all became a little bit more introspective, retrospective, and realized that the DataViz side of data and the storytelling and the psychology of it were just so much more compelling to me than really anything else. And so, 2020, I started working on my own practice, 3iap, kind of, in earnest. And so, since then, have been doing client projects around either communicating data, storytelling kind of things, or data products, and how do you design both of those in ways that are not just informative, but psychologically effective, that create the outcomes that we want to see. And yeah, so that’s where I’m at, and part of that is I enjoy the work, it’s also an excuse to do kind of side projects, and this one turned into what I thought would be maybe like a two or three-week exploration, and it’s now turned into a year-long research project. But that’s part of the fun, that’s why I’m here.
JS: It’s so funny, because I talk to so many people on the show, and folks who are in DataViz kind of gravitate towards computer science, because that’s a big part of making data visualization. But like, you started with computer science, and then, kind of like gravitated towards psychology, which is kind of like the other half of it, which is pretty interesting. That’s great. So Pieta, could you talk a little about yourself, and then how you and Eli kind of teamed up on these projects?
PB: Yeah, sure. So I am an evaluation consultant, I work for mission-based organizations, and I help them measure their outcomes; and all of the work that I do has to do with some kind of marginalized or disadvantaged population, and is aimed at building thriving urban communities, because that’s what I care about. So under that umbrella, there’s a lot of youth work, local community development, a little bit of healthcare, things like that. And I’m often collecting data, creating visualizations, and then talking to people about it, because the goal is always, so how can we use this to do better, how can we run our programs better. And so, I had a thought one day about targeted universalism and data visualization, and I thought, how would I set up my data visualizations to support a conversation rooted in targeted universalism? And I wrote a blog post entirely based on my thinking as it was emerging. I was like, here’s what I think I would do, and I put it out there. And you picked up on it, and Eli picked up on it, it was really exciting. And Eli reached out to me and said, I am trying to implement some of these things you’ve written about, and I’m getting some pushback – which was great for me, because I had just been thinking about this by myself. [inaudible 00:05:06] asking me these questions, he really got me to articulate some of the assumptions that were in that initial piece. And so, we decided to take some of our thinking, as it was just getting so much richer through us being in conversation about it, and write a follow-up blog piece; and he’s really taken some of the ideas that I had just thought of, and he’s starting to create some experiments and actually demonstrate how these things work in the real world and validate some of that. So that’s been really, really cool to see.
JS: Yeah, that’s great. So I want to get to the experiment part, but I want to make sure that folks are sort of on the same ground, I guess, so we all start at the same place. So I was hoping that you could start by just defining for folks targeted universalism, and that’s a big phrase. And then, I think we need Eli to define deficit framing, because that’s another big part of that [inaudible 00:06:07].
PB: Yeah, sure. So targeted universalism is an approach – I’ve seen it a lot in education, but I think it applies in all kinds of fields – where we have one universal goal for everybody, and then targeted approaches to help different populations reach that goal. So it’s a middle ground between a universal approach, where everybody gets the same thing, and then, if you don’t do as well, well, we gave you what everybody had, right? And targeted programs, which focus on particular populations, and then some people say, well, this is not fair, different people are getting different things, whatever those critiques are. So targeted universalism says, look, we’ve got high standards for everybody, and then, some populations have specific barriers, and we’re going to create targeted programs to make sure that they can reach the targets that we’ve got for everybody. And so, that was what was in my head when I started that first set of charts.
JS: And so, you are thinking about this in the context of evaluations?
JS: And so, you’re thinking about if some, like, if a firm or a researcher is doing an evaluation, how do they, for example, target a particular policy?
PB: Right. Yeah, how do they target, like, particular populations, could we say, hey, are girls doing really well in this program; do boys have some different barriers to accessing this material; do students who are speaking English as a second or third language have different barriers to getting the best out of this program than other people do – that was the kind of thinking.
JS: Got you, okay.
PB: I think that there was an assumption embedded there that I did not say explicitly in the blog, and that, like, should be said out loud, which was, I thought that you had to design your data visualizations to support different kinds of conversations, and that people looking at the visualization could have different conversations based on how you design that visualization. That’s the important thing. But I omitted that from the initial [inaudible 00:08:28] right?
JS: Right. Okay, so now we’ve got deficit framing – I don’t know how they fit sort of together, whether deficit framing subsumes it – but anyway, Eli, I was hoping you could give people sort of a framework for deficit framing.
EH: Yeah. So deficit framing is a phrase that I kind of made up, or that came out of the paper, it refers to a framing effect on a chart that leads to deficit thinking. Deficit thinking I think is the more common term, and it’s a term that I wouldn’t have known about, if it weren’t for Pieta’s post. I’d never come across it prior to this. I think it’s used a lot more in the education space, which I’m not as close to. But what it describes is this tendency to generally think about outcomes for minoritized groups only in relative terms to outcomes for majority groups, or for groups with better outcomes. And one of the kind of main harms that comes from it is this tendency to kind of conflate the outcomes with qualities of the people, so to blame outcomes on the people being visualized, as opposed to more systemic reasons or more external factors. And so, it’s kind of a nasty form of victim blaming, essentially, coming out of it. I think I’ve been working with a pretty narrow definition of it. There’s certainly more to that concept, but for the purposes of, like, how does this work with charts and graphs and data design, I kind of latched on to the part around personal attribution and how certain charts can lead to that.
PB: Yeah, I think that’s really the key thing that, you know, how do we avoid looking at a disparity or a difference, and then blaming people. I remember all these racist narratives that have always been used to explain different outcomes.
JS: Right. So the example that you sort of anchor both of these posts in, and I’ll just describe it here so folks can get it in their head, and I’ll let you all dive into it, but it’s basically the percent of students who achieved some threshold on a test score.
PB: [inaudible 00:10:50].
JS: Right. And so, you’ve got this line chart, and you’ve got four different lines for four different racial groups, white students, black students, Asian students, etc., etc. And so, the argument that you both make in these posts, and similarly, an argument that Alice Feng and I made in the Do No Harm Guide is like, should that be one graph or should it be multiple graphs. And so, I’m going to leave that as the context, so people who haven’t seen it – well, they should read it – have this one graph versus these sort of smaller multiple graphs. So maybe, Pieta, I’ll let you talk a little bit about it, and we can chat through it.
PB: So there were two reasons why I thought it should be multiple graphs. One was, in that initial graph, there is no target for everybody. And so, you end up with the white group becoming the benchmark, and that centers whiteness in ways that I think are really problematic. The other thing was, it leads me to read it as a comparison between the groups, right? I’m going to say, well, yeah, the black students are not doing as well as the white students, which just leaves this perfect-sized space to think something like, they’re not trying as hard.
PB: You know, like, just fill in with assumptions and bad things, right? What I wanted to demonstrate was the space between each group and where we want them to be, which allows for, I think, just a much more nuanced conversation and actually think about each group individually, and what might be their strengths and their barriers.
JS: Yeah, so the one graph where they’re all together is, what is the, I mean, depending on the context, what is the highest group or the lowest group, and is that the baseline, is that the goal. And that sort of, in this case, centers white students, who scored the highest.
JS: So I wonder then to take a next step, when you think about, let’s say, there’s four groups, how do you think about ordering or aligning those four next charts?
PB: Oh, that is a really good question, which I did not think about at first iteration. And I think the best answer I have, like, at this point it’s probably alphabetically.
JS: Yeah. I mean, I don’t know, Eli, you have any thoughts on this, but this to me is like one of the big challenges.
EH: So the ordering effect, I think is definitely important, and I think if you can start with – so the context for me is I’ve done this in reports where there are just multiple, multiple kind of different cuts of data involving race or gender or things like that. And so, if you can establish that early on, all these groups are ordered alphabetically, and that’s just kind of the rule across the board, you’ve at least established, like, here’s the reason for it. It doesn’t always work, so there will be cases where it does have like a centering effect, where it doesn’t feel good to have certain groups kind of like at the top or at the bottom. And so, for stuff like that, I’ve never been shy about, like, you know what, I’m going to reorder it, so that the group that needs the most attention, or the group that’s most relevant to this conversation is kind of front and center. But I do want to push back or dive into the question of separate charts versus [inaudible 00:14:30] and we can get to that in a second. But I think there’s a little more maybe nuance there that’s kind of worth unpacking too.
JS: Yeah, absolutely. I think you’re right, the thing about breaking the chart up is that the alphabetical ordering is sort of, like, objective kind of, you pull away all the other stuff, and even if you order it by value, you can do that by, say, magnitude whatever it is, it’s still an objective ordering, but it’s not putting them all together on the same graph, so that you have this sort of competing effect. And so, I think from my work, it’s like, I’m just going to be upfront about how I’ve made this decision, and that’s the decision. But yeah, so let’s dive into this single versus multiple panels.
EH: Yeah, so I think to one of Pieta’s kind of original points around this, the centering effect is definitely a good reason to separate, and I think for the purposes of goal setting, the majority group will tend to be – can tend to act as a benchmark. And that’s not always good, because who’s to say that the majority group is the best outcome. And so, I think those are good reasons to separate where you need to have, you need to show different goals and kind of create that emphasis. But the harms that come from blaming people individually or blaming people personally, these personal attributions, at least, a big chunk of that isn’t actually – it doesn’t seem to be related to whether they’re separate or not, it’s closer to the way that you present it even on the same chart. So in the research that I was doing all the different variations of charts that we tested, showed the outcomes on the same chart, but we varied the way that we showed them, and we were able to, even on the same chart, significantly reduce the people’s tendency, viewer’s tendencies to make these kind of personal attributions, it’s essentially stereotyping. And so, I think the benefit of having them in separate charts is it does, it takes emphasis away from these direct comparisons, which can be harmful, but there’s ways of doing that, that don’t require necessarily splitting it out too. I think for stories where it’s maybe high risk, and you’re worried that it could be misperceived, for stuff like that, I’ll lean pretty heavily into like any trick I’ve got to water down the differences or make sure that – not water down, but, like, to clarify the differences. I’ll do that, but it doesn’t – I don’t think that it always needs to be the case that you need to separate, if you have other options to make sure that people don’t kind of fool themselves into these personal attributions. [inaudible 00:17:18] what those are, if that’s helpful too.
JS: Yeah, well, I was just going to ask, like, is that a visual component in the graph itself, or is it the text that you use in and around the graph, or probably it’s going to be both, but?
EH: It is definitely both. One of the big takeaways is understanding that, by default, a lot of people jump to these blaming tendencies. This isn’t like a new concept, this is something that we all learned about in Intro to Psychology, fundamental attribution error, correspondence bias. Just as people, particularly like as Americans or people that live in individualistic or Western cultures, we jump to personal blame much faster than we jump to looking to external reasons for any kind of outcome or behavior. And so, if you kind of take that as a premise, and you assume that given any chart, a lot of people will tend to explain it in terms of these personal attributions, that’s the main thing that we’re trying to solve for. And so, you can go for that in a couple of different ways. I think the annotation layer and the text around it – it’s separate from the research project that I was doing, but I think it’s actually still probably even more of an important component. As much as you can do to frame it in terms of, the outcomes that you’re seeing could likely be caused by these external factors, X, Y, and Z, or systemic factors, X, Y, and Z. A lot of research that looks at misperceptions around causality shows that if you can provide people with alternative explanations for what they’re seeing, then their tendency to jump into superstitions or other kind of false conclusions around causality diminishes, just by giving them alternative explanations. And so, annotation layer and titles are a great place, I think, to do that. And then, within the charts themselves, what we found was that it has a lot to do with showing variability. The charts that tend to be the most problematic are charts like bar charts, or dot plots, or even confidence intervals. These are all charts that really emphasize the average group outcome, and they don’t show anything about kind of the variability of outcomes within a certain group.
And so, for something that’s as loosely defined as race, for example, you will almost always have a lot of people that – so if you’re looking at something like earnings by race, you will always have a lot of people in any group that earn very little, and you’ll always have a lot of people in any group that earn a whole lot, even if the average outcomes for those groups are different. And so, by showing the variability of those outcomes and making sure that it’s clear, using something like a jitter plot or a prediction interval, you can help people see that there is not only a lot of difference within groups, but the differences between groups aren’t actually as pronounced as something like a bar chart makes them seem. And so, it’s much more obvious that something like race is not a good predictor of an outcome like income, because you can see how much variation there is within any given group for whatever that outcome is.
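Eli’s within-group versus between-group point can be sketched numerically. The numbers below are invented purely for illustration (nothing here comes from his study): two simulated groups whose average earnings differ, but whose within-group spread is far larger, so group membership explains only a small share of the total variance – exactly what a bar chart of the two means would hide.

```python
import random
import statistics

# Made-up, illustrative data: two groups with means 10k apart,
# but a within-group standard deviation of 25k.
random.seed(42)
group_a = [random.gauss(55_000, 25_000) for _ in range(1_000)]
group_b = [random.gauss(45_000, 25_000) for _ in range(1_000)]

everyone = group_a + group_b
grand_mean = statistics.fmean(everyone)

# Eta-squared: the share of total variance explained by group membership
# (between-group sum of squares divided by total sum of squares).
ss_between = sum(
    len(g) * (statistics.fmean(g) - grand_mean) ** 2 for g in (group_a, group_b)
)
ss_total = sum((x - grand_mean) ** 2 for x in everyone)
eta_squared = ss_between / ss_total

print(f"Variance explained by group membership: {eta_squared:.1%}")
```

With these invented parameters, group membership explains only a few percent of the total variance, even though the group averages visibly differ – which is why a jitter plot or prediction interval, showing the full spread, tells a fairer story than bars of the means.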
JS: Right. So I’m curious then, Pieta, in the folks that you work with, who are probably, I would guess, are generally more accustomed to line charts, bar charts, pie charts, like, that world, have you had experience of telling people like, yeah, let’s try a jitter plot or a beeswarm chart or something like this, like, what is their reaction – you know, with this framing, because we’re trying to do this, what is their reaction to stuff like that?
PB: Yeah, I mean, and I think because Eli does even more of this work, he’s gotten even more pushback. But yeah, definitely, people are like, oh, but this is just how we’ve always looked at it, these are the charts that people know how to read, yeah, absolutely. I think – and my data visualization work is not as sophisticated as some people’s – but yeah, I get tons of pushback, just like basic improvements…
JS: But it’s interesting, because I’m guessing that you’re talking to your clients or partners, and you’re saying, we want to take this more, you know, this different framework where we’re not ranking or implying, or this or that, and so, here’s an alternative to that. But I’m guessing they’re still pushing back, and they probably agree with you on the framing piece, but they’re still pushing back because, well, it’s a bar chart. Right?
PB: Right, yeah, exactly. Yeah, this is what we know how to do.
PB: I think Eli early on came across a much more interesting objection.
EH: Yeah, this is exactly how we personally were talking.
PB: Yeah, the level of objection that I’ve gotten is just, oh, this is what we’re used to, you know, can’t it all be pie charts, this is what we know how to read. The objection that Eli came across, which I think is way more interesting and important is, oh, but everybody knows, right? We just need to point out the disparity because everybody who reads our chart knows the disparities are caused by structural racism. That’s a really interesting problem, because the answer is no, you can’t – you can never assume that your chart is only going to be read by people who read charts or visualizations from the same perspective that you do. And so, part of this is like sending your work out into the world in a way that is complete, right, the way [inaudible 00:23:09] guards against those kinds of readings.
JS: Yeah, I’m also curious, we’ve been talking about race and ethnicity, and that was sort of the grounding of the posts. I’m curious if you’ve done any thinking about other characteristics, I mean, internationally, it’s certainly like ranking of lower developed countries versus developing versus developed, there’s issues around gender and sexuality and all these things. So have you thought specifically about these other groups, or is it sort of right now your thinking is kind of evolving to spread out?
PB: I think it applies to anything where you have groups of people, and there could be stereotypes about those people. So I’ve been thinking about road safety and Vision Zero – a lot of cities have a target of having zero road deaths. Does this apply, like, do we need targeted approaches for pedestrians, people with physical disabilities, cyclists, drivers? Are there stereotypes about cyclists, or e-scooter users? Yeah, definitely. Yeah, I think anything where you think, oh, in the absence of data, people are going to, like, there’s a story that fits in here. Right? Yeah.
EH: The driving one and the road safety one is one that I’ve started noticing a lot recently, like, so now that I kind of have a clearer understanding of how pervasive the personal attribution bias really is, like, I see it everywhere, and I can’t unsee it, and it’s…
EH: It’s actually crazy in that way, but it’s also like, it’s pretty eye opening. And I think one of the topics that is just full of victim blaming is road safety. So there was an infographic that came out like a few years ago, relating driving time to obesity, and I think the title on the infographic was driving is making you fat. But in reality, it’s not – in the United States, outside of maybe like three or four cities, you don’t have a choice but to drive. It’s not the driving that’s doing it, it’s not your personal choice. It’s like years and years of suburban sprawl and years and years of kind of transportation policy that forced you into cars.
PB: And that’s also got something to do with the cost of housing that’s proximate to your workplace, which makes cycling to work not a feasible choice, and that your lower income status is also influencing your access to outdoor space and healthy food.
JS: Yeah. So I wanted to finish up by asking what you think, folks in the DataViz field, and I’m pausing here, I’m just thinking for a moment, because I’m saying DataViz field, but I think it’s actually broader than that. So I’m going to stick with DataViz field, but I think it’s broader than that. So let me just say people then.
PB: People who write about numbers.
JS: Yeah, right, what do you think people who write about numbers and their organizations, because I think that’s the other piece that it could be really, for example, Pieta, I mean, I think your experience is probably like the prime example – you could be all about trying these other methods and pushing your clients and you can imagine in an organization having the same pushback that you’re getting, well, we always make bar charts, and that’s it.
JS: So what do you think people should do, what is their first step to put the lessons and the stuff that you’re talking about and writing about, what should their first step be to put this into place?
PB: I think we do not think enough about how things like data visualization should fit with the overall philosophy and guiding principles of our organizations. Right? I think, one, we assume that like charts are just neutral, and that they’re numbers, so they must be true, and however they come out of your software is fine. We never think like if our organization has these beliefs and tenets about how we operate and how we treat people. I think a lot of organizations recently are starting to think like, well, that has effects for what words we use, how we write about people in text. Well, the next step is, how then does it affect how you write about people in charts and graphs. Right? I don’t have an answer for you, I want to say, like, that is a conversation to start having in your organization to start thinking about, like, this matters in the same way we could do harm with our words, or be more careful and thoughtful with our words. We can do it with data visualizations too.
JS: Yeah. Eli, any thoughts on this getting started piece?
EH: Yeah. So I think there’s two things, so one is, and you’ve kind of hinted at this, you can maybe expect pushback on, like, what is a jitter plot; and a lot of that, I think, stems from this false notion of like we need simplicity, we need it to be as kind of like simplistic as possible to reach a wide audience, and I think that’s a false trade-off. I think you can, with the right designs, you can still get the message across. But what you need to remember is that with these more simplistic designs, you’re creating awareness, but you’re not necessarily creating awareness around the right thing. So it’s not enough to teach the whole world that there are these wide disparities between different groups. The actual goal is much closer to, we need to teach that these disparities exist because of these external factors, because of these systemic factors. And so, you can kind of fool yourself into thinking that something like a bar chart is better because it will reach a wider audience, and maybe it will. But is it reaching the audience with a message that will actually help solve the problem or not? And the other one is more targeted towards designers and people that do this in general – I mentioned this before, but realizing how deeply embedded this concept of personal responsibility is within just kind of the American mindset, it creates this trap when you’re exploring data or you’re trying to figure out, like, what causes these outcome differences, and it kind of tricks your brain into stopping at answers that just aren’t as enlightening or compelling or interesting – the much bigger thing is typically going to be some external factor.
And so, if you stop yourself from this habit of immediately jumping to blame, and stopping at blame, and look, and just ask yourself, okay, what are the external factors, what are the systemic factors, you’ll get to much richer answers, you’ll get much more interesting answers and much more enlightening answers I think for you and your audiences. And that I think is what would be my main ask for anybody communicating numbers is think about the external factors, try to make sure that your audience is thinking about the external factors.
PB: I think for so many years, like, we’ve looked at these disparity charts for years and years. We have not solved the disparities yet. So that’s probably not…
JS: Right, still important to do, yeah.
PB: Yeah. Right. That’s probably not the most useful visualization. I think Eli, your point, like, how do we set up these visualizations to help us really think deeply, like, in much more creative ways about these issues is, I think, that’s really important.
JS: Yeah. Terrific, Pieta, Eli, thanks so much for this conversation and the work. I look forward to seeing where you go with it. There’s a lot more to do, I’m sure.
PB: There’s a lot more to do. I mean, this is really just the beginning of a conversation, and we’re inviting everybody to join in.
JS: That’s great. Well, I’ll link to all the blog posts and everything on the show notes, so people should check them out. But yeah, thanks to both of you for coming on the show, really appreciate the chat.
PB: Jon, thank you very much for having us.
EH: Thanks so much, Jon, appreciate it.
Thanks for tuning in to this week’s episode of the show. I hope you enjoyed that, maybe learned a little bit, thought about some strategies you might like to implement in your work, in your visualizations, in your data analysis. And be sure to check out all the links in the episode notes page, there’s a bunch of papers and blog posts there, I hope you’ll check them out. If you want to learn more about this, you should check out some of the resources at the Urban Institute website, lots of different toolkits, and other things that you can use in your own work. So until next time, this has been PolicyViz podcast. Thanks so much for listening.
A whole team helps bring you the PolicyViz podcast. Intro and outro music is provided by the NRIs, a band based here in Northern Virginia. Audio editing is provided by Ken Skaggs. Design and promotion is created with assistance from Sharon Sotsky Remirez. And each episode is transcribed by Jenny Transcription Services. If you’d like to help support the podcast, please share and review it on iTunes, Stitcher, Spotify, YouTube, or wherever you get your podcasts. The PolicyViz podcast is ad free and supported by listeners. But if you would like to help support the show financially, please visit our Winno app, PayPal page or Patreon page, all linked and available at policyviz.com.