Episode #185: Arathi Sethumadhavan

Arathi Sethumadhavan is the Head of User Research for Ethics & Society at Microsoft’s Cloud + AI, where she brings the perspectives of diverse stakeholders, including traditionally disempowered communities, to help shape products responsibly. She currently works on AI across speech and computer vision, as well as mixed reality. Prior to joining Microsoft, she worked on creating human-machine systems that enable people to be effective in complex environments in aviation and healthcare. She has published ~40 articles on topics ranging from patient safety to affective computing and human-robot interaction, and has delivered ~50 talks at national and international conferences. Her book, Design for Health: Applications of Human Factors, was released earlier this year. Arathi has a PhD in Human Factors Psychology and an undergraduate degree in Computer Science.

We talk about why it’s important to have teams like Arathi’s in organizations and how to use data ethically. We also talk about how her team brings perspectives from other people and communities into their work.

Episode Notes

Arathi on LinkedIn

Responsible AI at Microsoft

TED Talk: Carole Cadwalladr, It’s not about privacy–it’s about power

Wall Street Journal | Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case

Fortune | Amazon Reportedly Killed an AI Recruitment System Because It Couldn’t Stop the Tool from Discriminating Against Women

American Psychological Association | The ethics of innovation

Related Episodes

Episode #68 with Randal Olson

Episode #179 with Kandrea Wade

Support the Show

This show is completely listener-supported. There are no ads on the show notes page or in the audio. If you would like to financially support the show, please check out my Patreon page, where just for a few bucks a month, you can get a sneak peek at guests, grab stickers, or even a podcast mug. Your support helps me cover audio editing services, transcription services, and more. You can also support the show by sharing it with others and reviewing it on iTunes or your favorite podcast provider.

Featured image by Gertrūda Valasevičiūtė on Unsplash

Transcript

Welcome back to the PolicyViz podcast. I am your host, Jon Schwabish. I hope you are all well and healthy. Last week was a surprise episode of the podcast, where I talked to Robert Kosara and Alvitta Ottley about the IEEEVIS annual conference that took place just a few weeks ago, so I hope you had a chance to listen to that. You should also check out the Data Stories podcast episode that was released a few days later, which also covered the IEEEVIS conference. As with any conference like that, no single podcast, or even a pair of podcasts, is going to cover it all, so there’s a lot of great information you should consider checking out over at the IEEEVIS website.

So on this week’s episode of the show, we turn back to data, and data visualization, and the ethical use of data. And I’m happy to have Arathi Sethumadhavan on the show. She is the Head of User Research for Ethics & Society at Microsoft, and I found out about her through a very interesting LinkedIn article on the work that she and her team are doing over at Microsoft. Arathi has published, I think, over 40 articles on a range of topics: patient safety, human-robot interaction, affective computing. She also has a book, Design for Health: Applications of Human Factors, that came out earlier this year.

So in this week’s episode, we talk about her work, her team’s work, and what it means to think about ethics and society as it relates to data, as it relates to, well, products at Microsoft, and as it relates to artificial intelligence, which clearly is becoming a major force in all of our lives, especially for those of us who are working in data and in the field of data visualization.

So I hope you’ll enjoy this week’s episode of the show. And here is my discussion with Arathi.

Jon Schwabish: Hi, Arathi. How are you? Thank you for coming on the show and taking time out of your day.

Arathi Sethumadhavan: Well, thank you so much. I’m really happy to be here.

JS: I am excited to chat with you. I have been reading about you and your work, and it sounds very exciting, and I want to learn more about it, because the ethics around data, product development, and communication is certainly important, especially in the moment that we’re having here in the United States. So I thought we would start by having you talk a little bit about yourself and your background, and the team that you’re leading at Microsoft. And then we can get into talking about what the role is, what the team does, and how you work with all sorts of folks over there at Microsoft.

AS: So my background is, my undergrad is in computer science. And I grew up in India, and then I moved to the United States for grad school. My PhD work is in engineering psychology, so essentially, I study how people interact with complex systems. Most of my work in my grad school days was focused on aviation. And then, after graduation, I moved from aviation to another safety-critical industry, which is healthcare, and I worked on medical device product development for the longest time. Fast forward to today, I’m on a team within Microsoft’s Cloud and Artificial Intelligence business, and the team is called Ethics and Society.

JS: So, Arathi, when it comes to the work that you do, what do you mean by ethics in terms of the work that you are all doing?

AS: Ah, I’m glad you asked that question. Ethics means a few different things to me. Ethics is a responsibility to understand and respect the values, needs, and concerns of end users and other impacted community members, including those who are not our direct paying customers. Ethics is being proactive and not reactive, right? So this means considering potentially harmful consequences of the technology and mitigating those prior to release. And that can actually result in greater trust in our brand too. Ethics for me is also translating principles into everyday work. What I mean is that it’s very important to have role-based training and tools and practices that engineering teams can use to translate these principles into practice.

And lastly, I do want to say one thing: I view ethics as innovation. By embedding a multidisciplinary team and using multidisciplinary approaches, I think it’s really possible to create exceptional products, services, and tools for our customers. So this can actually be a competitive advantage. Ethics doesn’t have to be viewed as a compliance thing that you have to do; instead, it can actually be a competitive advantage for you.

JS: Yeah. I mean, it’s fascinating. I guess it shouldn’t really surprise me. But the group itself is not that old, right?

AS: That’s right. The group started in its current state around April of 2018 or so, so we’ve been around a little over two years. But my manager had been leading a similar sort of team, which at that point was called Biz AI and Ethics, prior to this particular team being formed.

JS: Can you talk a little bit about, like, how big the team is and the folks on the team? Are they computer scientists? That you have a PhD in engineering psychology is a whole other conversation we can have at some point; I’m fascinated. What are the backgrounds of the folks on your team?

AS: Yeah, that’s a great question. We’re actually quite a multidisciplinary team, and that’s intentionally so, because we believe that we can innovate responsibly if we are able to bring in diverse perspectives, right? So that we are able to challenge dominant views. So, therefore, our team comprises people like me. I lead the user research discipline within the team, and my role is really to bring the perspectives of impacted community members into shaping products. We also have designers, and project managers, and engineers. So it’s quite an interdisciplinary team. I would say our team size is about 30 or so at the moment.

JS: Wow! Yeah, that’s pretty sizable. So this moment of responsible development of technology has, I think, taken shape and taken hold at several organizations around the world. And I was hoping that you could talk about why this is important, especially right now, in the conversation that we’re having around the country.

AS: Yeah, I mean, there are obviously really remarkable and well-intentioned applications of technology, right, especially if you think of AI and other emerging technologies. I mean, AI is transforming a lot of the major industries that you can think of, from healthcare, to agriculture, to transportation. But there’s a flip side, right? And that is that a lot of these technologies are being deployed with very little assessment of the impact that they can have on individuals and societies. I don’t know whether you’ve seen this, Jon, but last year there was a news article that came out where a voice deepfake was used to scam a CEO based in the UK.

JS: Yeah, I remember that.

AS: Yeah. Then you may have seen the article that came out on an AI recruiting tool that automatically categorized male candidates as being superior to female candidates. Of course, you know about the role of social media in terms of misleading vulnerable voters; I mean, there was a great TED Talk by Carole Cadwalladr on the role of social media in Brexit. You hear news about racial disparities in automatic speech recognition systems. We hear about facial recognition systems and how they discriminate against certain groups, and so on, right? So the point here is that it’s very important to define the current and next generation of technological experiences with intention. And that’s why organizations are starting to pay a lot of attention to it.

JS: Because essentially, the argument that you’re making, right, is that these processes and programs are good for the bottom line, which is, I think, an argument that more and more people are starting to make: you don’t do these things just because you feel like you need to have more women on the board; you do these things because having more women on the board makes you a better, more profitable company. It’s an argument, I think, that goes a long way. In your role at Microsoft, is it just about the products that are going outside the organization, you know, what I can buy at the Microsoft Store? Or is it also embedded within the internal work, and also, I would think, spreading to the culture of the work inside the organization?

AS: Ah, that’s a really interesting question. So here’s the thing, right? Of course, it has to manifest in the products that you’re building. But in order to do that well, you’ve got to have the right processes in place, and you’ve got to acknowledge that it’s people who build these technologies, right? So it’s very important to have the right sort of organizational culture and mindset. And we do that in a few different ways. We do that through developing role-specific workshops for different disciplines. You know, we try to obtain leadership sponsorship for ethical product development. We also have to do a lot of work in terms of incentivizing ethics and making it a core priority in how people think about products. So that’s super important. Yeah, the culture is really, really important, because, at the end of the day, people create these products. Then it’s really about bringing the perspectives of diverse individuals into product development, you know, and by that I mean talking to impacted groups and community members, and really using that to challenge dominant views.

And it’s really about pursuing principled product development. And luckily for us, we are in this phase where our leaders have created that for us. Microsoft published something called The Future Computed, a publication where our leaders Harry Shum and Brad Smith talk about six ethical principles for AI, which are fairness and inclusion, transparency, privacy and security, reliability and safety, and accountability. And we apply these principles when creating products. So these are all the things that you need to do within the organization. But you also have to realize that we are in an environment where things are constantly shifting, things are constantly changing; new regulations are emerging, you know, and national events, international events, all of these can instill feelings of trust, or feelings of fear, among people and end users towards technologies, right? So it’s very important to take these into account and respond to new knowledge as it emerges as well.

JS: Mm hmm. I’m curious, as you already mentioned, you have a pretty heterogeneous group of people on the team itself. And you mentioned also that you talk to stakeholders and members of the community. And I’m curious, for the more quantitative people on the team, is that hard to do? This is something that I’ve been curious about talking about with people lately: for people who are trained in and tend to do quantitative methods and quantitative work, this idea of talking to actual people, talking to the people that we study and the people that we communicate with, is a pretty foreign idea, in terms of, you know, we don’t do that. We download data, we collect data, and we analyze it, but we don’t actually talk to people. So, do you find that it’s hard for some people to do that? And do you find that by having this broad team with all these different skill sets, you’re able to, in some sense, kind of train people on how to be good at having these stakeholder meetings and outreach efforts?

AS: Yeah, yeah. So luckily, for us, we are a large company, right?

JS: Yeah. Yeah.

AS: So we sort of, I’d say, have the privilege of having different disciplines. So we don’t expect, you know, an engineer or a data scientist necessarily to go and engage directly with stakeholders. We do expect them to think through some of these questions and the benefits and harms that the technologies they’re working on can have on these human beings, but we don’t necessarily expect them to do the direct engagement with the community, because that might not be their area of expertise. It’s a totally different skill set. So at Microsoft, we have human-centered disciplines, like user researchers and designers, and this is the kind of onus we put on those disciplines. Now, within the Ethics and Society team, I lead the user research discipline, and what my team does is just that, which is engage with the community. And we do that through a variety of qualitative and quantitative research techniques.

JS: So when it comes to this responsible product development, you bring together these different perspectives. So how does that ultimately inform the final product? You talked about this a little bit, but I’m just curious how you take those perspectives and use them to inform the final product. And to expand on that a little bit, how do you convince the developers in other places at Microsoft that these are components that they should bring into their work? I’ll give you an example from my experience, which is obviously very different. But, you know, very early on in my tenure at Urban, I was trying to reduce the number of pie charts that people were creating at my organization. People were making pie charts with 12 slices in them, and, you know, that’s not a good technique. But whenever I would try to argue that someone should not use that, they wanted evidence to support my argument. They’re researchers, so they want this sort of hard evidence to support my argument. So when it comes to responsible product development, how do you bring in the different perspectives of the people that you and your team have talked to, and convince your other colleagues at Microsoft to embody and embrace these concepts and ideas?

AS: Yeah, so I think the answer sort of lies in your question itself. I think there’s a lot of power in the actual perspectives of these different stakeholders. So like I said earlier, we use a lot of qualitative and quantitative research techniques to solicit feedback from end users and other community members. And what we do is we conduct interviews or large public perception surveys. We do community [00:15:23 inaudible] where we assemble and line up product teams with the impacted community members so that they can hear the perspectives of these individuals, you know, directly. These quotes are extremely powerful.

JS: So what do you mean by bringing perspectives in from other people into the work that you all do?

AS: We do this in a few ways. One is, as part of your product development process, ensure that you have considered a diverse pool of end users, most importantly including end users who are typically forgotten or excluded. So this means that you’re intentionally going out and recruiting people from the LGBTQ+ community, racial minority groups, women, introverts, those with visual impairments, speech impairments, and so on. We believe that when we can address the needs and concerns of marginalized communities, we are able to better address the needs of a broader range of people.

The second point I will make here is think about your indirect stakeholders, in addition to your end users and other direct stakeholders. So these could be individuals whose jobs could be impacted by the technology you’re building, for example.

The third point here is to seek advice from domain experts and human rights groups, especially as you work in novel, complex domains. We have worked with experts on situation awareness, tech policy, law, and human rights on different projects, as we realize our expertise in some of these spaces may be limited, and these individuals can actually help us. When product teams hear firsthand the needs, and the values, and concerns of the community directly, that’s really powerful. So that helps a lot. And the important thing to realize is that getting feedback is not a one-time thing that you do and then call it a day. I’m talking about getting feedback from the community throughout your product development lifecycle. You know, if you think about it, right from envisioning, to defining the problem space, to prototyping and building, to post-deployment, throughout all of these phases, you’ve got to engage the community and learn. I mean, that’s the only way, you know, you can create a superior product.

Now, you asked about convincing. One, the data speaks for itself, so that helps immensely. And two, I like to think that most people have good intentions in mind, so when they see data, it’s easy to persuade them. Three, I have to say that when we engage with different product teams, we have a formal handshake that happens with the leadership of that product team. People create products, and unless you change the organizational mindset in some way, it’s very difficult to do anything anywhere. We work very closely with product teams, and there is this strong interest, strong buy-in, on the kind of work that we are bringing to the table. So that immensely helps as well.

JS: Yeah, yeah. No, it’s interesting, because it seems like not only does it affect the product, it affects the culture of the people that you work with. And then that continues through the lifespan of the product and into the next product or what have you.

AS: I do have to say, Jon, that it helps when the teams we partner with see the rigor and the processes that we bring to the table. You know, we do end-to-end engagement with them, and that includes doing harms modeling sorts of exercises to anticipate what can go wrong with the technology and what the impact on different stakeholders could be, all the way through to the research with the community, right? That could be qualitative research activities or more quantitative sessions.

JS: I’m trying to get my head around some of this. Do you have an example of a product that could harm a customer or a stakeholder, and then how your team would come in, with evidence, and say, this is how we might go about addressing it? From what we’ve talked about already, it doesn’t sound like you come in with a fix; you come in with suggestions and data. But I’m just curious, can you give us a concrete example so I can get my head around what a harmful product would look like?

AS: Certainly. So there was a product that we worked on called Custom Neural Voice. The whole idea here is you can take snippets of someone’s voice, so you just need 500 to 1,000 utterances of somebody’s voice, and this can result in a voice font. But the thing about that is, it could then say things that you never uttered, akin to a voice deepfake, right? So if you think about it, there are huge repercussions if this is not developed right. So we did a lot of very interesting work on this front. We created a gating process around this particular technology so that it’s not available to everyone freely in the market; we vet the enterprise customers that we would be providing the service to. We worked with voice actors, because they are a group of individuals whose jobs could be impacted by this particular technology, to understand their perspectives. And this resulted in a set of guidelines around how companies that use the service need to be transparent with this group of individuals, and that became a part of our terms and conditions. And this service actually has a lot of benefits too, right? If you think about it, it can be hugely advantageous for people who don’t have a voice, who have speech impediments; it can be a confidence booster. So we also did a lot of primary research with individuals with speech impairments to try to understand their unique needs. And that resulted in a set of guidelines around how to create this service so that it caters to the needs of this group of individuals.

And lastly, I have to say that it’s very important, when humans interact with different experiences, that they don’t feel deceived. And it’s very easy in a situation like this, where you’re interacting with a synthetic voice, because it can be extremely realistic sounding. So it’s very easy to feel deceived if you don’t know that you’re actually interacting with an automated agent. So we also worked on that; in fact, my team did a bunch of studies to understand the right disclosure that’s needed for consumers when interacting with a synthetic voice. So we approach mitigations from different angles.

JS: Right. So it’s not necessarily making the voice sound more computerized to get away from that problem. But it can be about warnings on the product, so that the consumer is aware of it.

AS: That’s right, because an extremely low fidelity voice can actually be really disturbing; that hampers the user experience. So you keep it high fidelity, but at the same time make sure that it’s an authentic experience and there is no deception happening. I do want to add, though, Jon, that there are absolutely certain situations in which we recommend a high fidelity synthetic voice not be used, you know.

JS: Oh, okay, yeah.

AS: It may not be appropriate in all scenarios. Like, for instance, if you’re calling 911 and that takes you to a high fidelity, human-sounding voice, even if there’s some sort of disclosure, it can give the consumer a false sense of confidence, because you tend to equate a high fidelity-sounding voice with having a high level of capabilities. And that may not be the case all the time, because it’s still an artificial agent. So there are absolutely certain situations where you want to avoid using that fidelity, and we outline all of those in our guidelines for responsible development of this tech.

JS: Right. That’s really interesting. Okay. So I want to close by maybe taking a practical, concrete approach to this. Are there specific tools, techniques, or actions that you would recommend for responsible and ethical development of technology? And I might even throw in use of data as well. I think, you know, probably a lot of people listening to this podcast are working with data day in and day out. They may not be creating physical products, but they’re working with data. So I’m just curious, you know, what sort of techniques and tools you and your team would recommend for those folks?

AS: Yes. So, I suggest that technologists actually do the following, and I’m going to talk about, like, ten things. Okay. So the first one is really simple; it’s pretty basic if you think about it. It’s really trying to determine: what problem are you trying to solve? Is there actually a technology need for this problem? We actually discuss this a lot. Is this a human problem? Or is this a problem that can actually be solved through technology? So you want to understand that first.

Two, who are your impacted stakeholders? And by that I mean end users as well as other stakeholders who can be indirectly impacted by the technology, for example, people whose jobs could be impacted by the technologies that you’re building. The third step is really thinking through: what are the benefits of this technology for each of the stakeholders that you just identified? And then, what could be the potential harms?

I suggest using some sort of analytical approach to systematically think through these benefits and harms. Internally, we have developed certain tools that help us do this in a systematic manner, but there are also frameworks available publicly, such as value sensitive design frameworks, that can help you think through the values, concerns, and beliefs of different stakeholders, and the potential harms that these technologies can bring.

Then I would really suggest that you think through some of the key ethical principles, such as fairness, reliability, privacy and security, inclusion, transparency, all of that. So by that, I mean asking yourself some key questions: Does your system treat all stakeholders equitably and prevent undesirable stereotypes? Does the system run safely even in the worst-case scenario? Is the data protected from misuse and unintentional access? Has the system been created in an inclusive manner to make sure that there are no barriers that could unintentionally exclude certain groups of people? Are the outputs of the system understandable to the end users? And, finally, are you taking accountability for how the system is operating and scaling, and its impact on society?

The sixth point that I would make is really including diverse disciplines as part of your product development process; that includes social scientists, human rights groups, designers. And that’s really important, like I said earlier, to challenge dominant perspectives.

Point seven is really the need to make sure that, once you’ve identified these harms, you create the right sort of work streams to mitigate them, which includes involving diverse stakeholders throughout all stages of product development, from envisioning to post-deployment.

Then point eight is, like I said earlier, acknowledging that people develop technologies. So you’ve got to create structures where people are actually incentivized for making ethics a core priority of their work.

Point nine is making sure that you have developed role-based training, best practices, and tools that product teams can use, because principles will only go a certain way. So unless you have tools and best practices that teams can adopt and run with, you’re not going to be successful. And of course, you also need processes that will hold the product teams accountable. So those are sort of my 10 points.

But I want to close by saying that it’s really important to recognize your domains of ignorance, right? Because this is a new space. For all of us, this is a new space. We are learning by doing. And so having that humility is very, very important.
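(For listeners who work with data and want to make the benefit-and-harm analysis Arathi describes above a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not Microsoft’s internal harms-modeling tooling, nor an official value sensitive design library; the class names, fields, and example entries are illustrative assumptions, drawn loosely from the custom voice discussion earlier in the episode.)

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stakeholder:
    name: str
    direct: bool                      # True for end users, False for indirectly impacted groups
    values: List[str] = field(default_factory=list)

@dataclass
class ImpactAssessment:
    stakeholder: Stakeholder
    benefits: List[str] = field(default_factory=list)
    harms: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

    def unmitigated(self) -> bool:
        # Flag assessments where harms were identified but no mitigation is planned yet.
        return bool(self.harms) and not self.mitigations

# Hypothetical example, loosely based on the voice actor discussion above.
voice_actors = Stakeholder(
    name="Voice actors",
    direct=False,
    values=["consent", "control over one's voice", "fair compensation"],
)
assessment = ImpactAssessment(
    stakeholder=voice_actors,
    benefits=["New licensing opportunities for recorded voice work"],
    harms=["A voice font could say things the actor never recorded"],
    mitigations=["Gated access to the service", "Transparency obligations in terms and conditions"],
)

if assessment.unmitigated():
    print(f"{voice_actors.name}: harms identified with no mitigation planned")
else:
    print(f"{voice_actors.name}: {len(assessment.harms)} harm(s), {len(assessment.mitigations)} mitigation(s)")
```

The point of a structure like this is simply to force every identified stakeholder, direct or indirect, to have its benefits, harms, and mitigations written down and reviewable throughout the product lifecycle, rather than considered ad hoc.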

JS: Yeah, I think that’s a great point to end on. And I would probably add empathy to that. But the humility of saying that you don’t know and you’re willing to have these conversations and make these tough choices is such a key part of everything that you’re doing.

Well, Arathi, thank you so much for coming on the show. This is fascinating stuff. I hope others will learn from your experience at Microsoft and hopefully be able to take some of these tips, and we’ve got 10 tips, which is great, into account in their own work.

AS: Well, thank you so much.

JS: Thanks, everyone, for tuning into this week’s episode of the podcast. I hope you enjoyed it. I hope you will check out Arathi’s work and the work of her team over at Microsoft. And I hope you will consider supporting the podcast. Please tell your friends and colleagues about it. Write a review on iTunes or wherever you listen to this podcast, or head over to my Patreon page, where for just a few bucks a month, you can help support the podcast, the transcription, the web hosting, the audio editing, all that good stuff that allows me to bring the show to you. All right. Well, until next time, this has been the PolicyViz Podcast. Thanks so much for listening.

A number of people help bring you the PolicyViz Podcast. Music is provided by the NRIS. Audio editing is provided by Ken Skaggs. And each episode is transcribed by Jenny Transcription Services. If you’d like to help support the podcast, please visit our Patreon page at patreon.com/policyviz.