Welcome to Season 7 of the PolicyViz Podcast! I hope you and your friends and families are well and healthy and safe in these strange and turbulent times. I’ve spent the last few months doing a lot of reading, spending a lot of time outside, and hanging out with my kids and wife. We are gearing up for a new, very different school year where I’ll be working side by side with my kids from our home in Northern Virginia.
I’m very excited for this season of the podcast. I have a great lineup of guests coming your way, working in areas of data visualization, algorithms, artificial intelligence, machine learning, and community development. And I’ll be spending time talking with my guests about how these aspects of their work and experiences intersect with race, gender, the distribution of wealth and income, and more.
You’ll also note some changes to the show. I have new theme music (from The NRIs) and have made some other changes to audio quality and editing. I’ve also changed the show notes page a little bit.
To start this season, I invited Kandrea Wade to talk about her research and her work. Kandrea is a PhD student in the Information Science department at CU Boulder focusing on algorithmic identity and the digital surveillance of marginalized groups. Along with developing her research at CU Boulder, Kandrea seeks to discover and assist in creating proper ethical regulations and education on algorithmic identity and digital literacy. With a background of over 15 years in entertainment and media, her interests have evolved from demographic programming for entertainment and media theory to corporate user ethics and legal protections for the digital citizen. Kandrea holds a BA in technical theatre from The University of Texas at Arlington and an MA in media, culture, and communications from New York University.
Support the Show
This show is completely listener-supported. There are no ads on the show notes page or in the audio. If you would like to financially support the show, please check out my Patreon page, where, for just a few bucks a month, you can get a sneak peek at guests, grab stickers, or even a podcast mug. Your support helps me cover audio editing services, transcription services, and more. You can also support the show by sharing it with others and reviewing it on iTunes or your favorite podcast provider.
Welcome back to the PolicyViz podcast. I’m your host, Jon Schwabish. I hope you and your friends and families are well and healthy and safe in these strange and turbulent times. I’ve spent the last few months doing a lot of reading, spending a lot of time outside, and hanging out a lot with my kids and my wife.
We are gearing up for a new, very different looking school year where I’ll be working side by side with my kids from our home in Northern Virginia. But I’m very excited to kick off Season 7 of the podcast and I have a lot of great guests coming your way. You’ll also notice a couple of new things about the show – new intro music, new outro music for one, but the same high quality sound editing. I’ll be providing transcriptions of the show and, of course, great guests.
I also have a few new things on the blog. I have a new series called the “On…” series where I write short, sometimes undeveloped thoughts or ideas about data visualization and presentation skills. I’m also entering the final stages of working on my next book, ‘Better Data Visualizations,’ which is set to come out in January 2021.
Personally, in response to the many protests about police brutality and inequality that have been rocking the United States, I’m trying to take a more racial-equity lens to my research and my data visualization work, and I’m trying to extend that perspective to the podcast. So, in that vein, in this new season of the podcast, you’ll hear more from people of color and from people doing work to serve underrepresented groups and communities.
I’ll also be doing a number of talks in the next few weeks, one for the New York Data Visualization Meetup and another one in October for the IEEE Vis Conference and I’ll put those links on the show notes page if you would like to join me and learn more about the work that I’ve been doing in these areas.
So, to kick off this season of the podcast, I am excited to welcome Kandrea Wade to the show. Kandrea is a PhD student in the Information Science Department at CU Boulder. She focuses on algorithmic identity and the digital surveillance of marginalized groups. We talk about what brought her to this area of research, which is a really interesting story, and the different projects that she has underway in her lab at CU Boulder.
So, I’m looking forward to bringing you a great set of guests this year, and I hope you’ll continue to support the podcast by sharing it with your friends, rating and reviewing it on your favorite podcast provider, and, if you’re able, supporting it financially over at my Patreon page. So, let’s start Season 7. Here’s my conversation with Kandrea Wade.
Jon Schwabish: Hi Kandrea, thanks so much for coming on the show. How are you?
Kandrea Wade: I’m doing very well today, Jon. How are you? Thanks for having me.
JS: I’m doing great. I’m really excited to have you on the show and talk about the work that you’re doing. I wanted to start by letting folks know how I found out about you. So, earlier in the year, I signed up my kids for the ‘Skype a Scientist’ program, which is a great, free program where scientists in all sorts of different fields virtually talk about their work with kids and answer questions. So, I signed my kids up and they attended one on turtles and one on someone doing something with whales, and they were, sort of, moderately engaged. Then I saw yours come up, and I was like, “Ooh, this one looks really good.” I said, “Hey guys, do you want to watch this? This one is going to talk about bias in algorithms and machine learning.” Of course, they just rolled their eyes at me and walked away. But I watched the whole thing and found it really interesting. So, I’m really happy that you were able to chat with me, because you’ve got this new paper out and I just want to learn more about the work that you’re doing. So, maybe we can start by just having you talk a little bit about yourself and your background, and then we can talk about the work you’re doing.
KW: Absolutely! Thank you. Thank you so much. So, about me; I’ve always been in love with art and media and technology, which has led me down a super diverse path. I have a bachelor’s in technical theater and I worked as a technical director and a lighting designer for over 12 years. In that time, I taught at a college, I worked at several concert venues, and I held technical director positions. Then, from there, I started working in events and film and TV. I worked for South by Southwest, Viacom, Bravo, Disney, ABC, and several other production companies.
Over that time, in addition to entertainment, I’ve also worked in education for 15 years. So, I worked as a senior admissions advisor for the Princeton Review. I did that before I went to NYU for my master’s in media, culture, and communications. At NYU, I wanted to study demographic programming for production companies, TV, film, and streaming services, and as I moved through my program, I realized that in order to do demographic programming, you have to have user data, of course; and this made me start thinking about what exactly these companies can see and what they’re doing with that user information.
So, this introduced me to the whole world of user data ethics and the bigger questions of how we as humans are sorted and categorized in systems. I started taking all of my electives in data science and applied statistics, and made my master’s program a combination of a media studies focus with a data ethics emphasis. I started to focus on bias and ethical dilemmas in the usage of user data, specifically looking at tech, corporate systems, operations, and government. This led me to CU Boulder, where I’m currently working on my PhD.
My focus is in algorithmic identity and the digital surveillance of marginalized groups. I am a part of the Identity Lab, which is led by Jed Brubaker and the Internet Rules Lab where we focus on ethics and policy and that lab is led by Casey Fiesler. So, that’s just a little of an overview of me.
JS: Yeah, you have really come to this from far afield.
KW: Absolutely, but it’s putting together both sides of my brain that I really love. It’s art and technology, and it’s computers and people. So, it’s really a match made in heaven, perfectly.
JS: Well, you seem to have the practical background to see how the actual production work is done, and how all of this feeds into that whole ecosystem of advertising and actually producing the media.
KW: Absolutely! That’s a lot of it. When I was looking at demographic programming, I was looking at streaming services a lot, and streaming services like Netflix are kind of a black box; you don’t really know how they’re curating what you’re seeing on your page. Everyone sees a different sort order when they open up Netflix. I found that to be super fascinating. In trying to target audiences with the media they wanted to watch, I was like, oh, audience targeting, people targeting, this is all very fascinating to me. It’s really trying to understand how people work and what they want, and how that’s now being done through computers and algorithms, which is super fascinating to me.
JS: Right. Can you talk a little bit about this phrase that you just mentioned, algorithmic identity? What does that mean to you? What does that mean in the lab where you work?
KW: Absolutely! So, we all have our physical identity – you have your race, your gender, your cultural identity, your ethnicity, where you come from as a person. But now, with these profiles and all the digital footprints that we create with all the movements that we make online, especially in the times that we’re in where everyone is on their computer, you have a digital copy of yourself. That copy of you is not necessarily completely representative of who you are as a person.
So, your algorithmic identity is, I would say, kind of a proxy of who you are as an actual physical person. For every one of us physically walking around in the world, there is a digital copy of us existing inside of these systems that’s being categorized, looked at, and sorted all the time. So, in my lab, we look at algorithmic identity and we’re trying to figure out ways to define this, ways to explain this to other people, and ways that we may need protections for what we now call the digital citizen. Because it’s not just about your rights in the physical world anymore; it’s what rights do we have, like the GDPR’s right to be forgotten, what rights do we have to delete our data, or even to pull that data to understand how many data points there are on us floating out there in the world.
So, there aren’t as many regulations and policies and laws as there are for us in the physical world. We look at the algorithmic identity as an extension of the self, and at what we need to be doing to make sure that that self is protected as well.
JS: Right. I want to ask about the paper that you published in May, but maybe I’ll get there through a quick segue question. When you’re working on these algorithmic identity measures and profiles, and in the paper you’re distinguishing between different demographic groups, specifically race and gender, how does that interplay with how our virtual avatar exists, and how companies and governments use that information differently across racial, ethnic, gender, and other groups?
KW: That’s a great question. So, something that happens a lot of times as we create these profiles is that we have agency in them. We get to determine, you know, for instance, on Facebook or Snapchat, or whatever your social media profile is, you get to make the determinations and self-report your gender, your race, your identity, your age, if you want to, or not if you don’t want to.
A lot of times that’s a really great way that we can actually represent ourselves, but there are also other data points being tracked by healthcare companies, insurance companies, credit scoring agencies, and things like that, that we don’t necessarily have as much control over. It’s what the government defines us as and what boxes we have to check for census purposes or birth certificates. Those can be literal boxes that we’re put into and have to define ourselves by. Those little data points get put into systems where assumptions are made about us or matching is done for certain profiles.
So, there are different implications that come from that. What we see a lot in this field is that the same sort of socioeconomic, real-life physical bias and discrimination that happens in the real physical world is now, unfortunately, being transferred into these digital systems. The implications we see a lot of times really affect the marginalized groups I study: people of color, LGBTQIA+ groups, people with disabilities.
They are either not being considered, or, due to bad or biased historical data that’s been collected on them, we’re training systems in a digital space to reflect the same bias and discrimination that happens in the physical world. So, that’s a lot of what we look at, and how we can maybe mitigate and solve a lot of that, because computer systems can do what humans can, but faster and at a bigger scale. We want to make sure that if we’re going to be categorizing humans this way in systems, we can account for the bias being input into those systems.
JS: Right. So, I want to ask about the data collection part, but I want to give you a chance to talk about this paper that you’ve published, because it goes hand in hand with what you just said. The title of this paper, and I’ll post it to the show notes for folks who are interested, is ‘How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis.’ So, I thought maybe we could start by having you give us an overview of the paper in general terms, and then we can dive into some of the details. I have a few questions for you about the paper itself. Yeah, maybe just give us an overview of what the hypothesis is. Then, also maybe talk a little bit about how the data are collected, both in this paper and more generally. I certainly don’t have a background in this, so I’m just curious how the analysis is done and the data are actually collected and used.
KW: Absolutely! So, this was a study of 92 image databases that are utilized as training data for facial recognition systems. We wanted to analyze how they expressed gender and race, how those decisions were made and annotated (if they even were annotated), and how diverse and robust those image databases are.
So, our findings showed that there are actually issues with image databases, as is to be expected, but those issues specifically surround the lack of explanation and annotation when it comes to how race and gender are defined within those databases. We often found that, a) gender was only represented as a binary, that is, as male/female, except for a few instances that accounted for other identities in their reporting but still only contained images labeled in the binary. Then, b) we came across issues of race being defined as something insignificant, or indisputable, or apolitical, when we know that in the physical world there are many layers of sociopolitical factors, like status, income, country of origin, and parental lineage, that play into how someone’s race or ethnicity is defined. We also noted that the diversity of these databases was often lacking.
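The annotation gap described here can be made concrete with a small sketch. The records and field names below are hypothetical, not drawn from any of the 92 databases in the study; they just illustrate the difference between a bare demographic label and a label that carries its own provenance and can therefore be audited or disputed.

```python
# A typical record in many face-image databases: bare demographic labels
# with no record of who assigned them, how, or under what definitions.
bare_record = {
    "image_id": "img_00421",
    "gender": "female",   # binary-only field; no option for self-identification
    "race": "black",      # annotator's guess; no schema or justification stored
}

# A record that documents its own annotation process, so the labels can be
# audited, disputed, or corrected later (a hypothetical schema).
annotated_record = {
    "image_id": "img_00421",
    "gender": {
        "value": "female",
        "source": "self-reported",          # vs. "annotator-perceived"
        "schema": "open vocabulary",        # vs. a fixed binary
    },
    "race": {
        "value": "black",
        "source": "annotator-perceived",
        "schema_url": "guidelines-v2.pdf",  # the written rules that were used
        "annotator_agreement": 0.78,        # how much annotators disagreed
    },
}

def is_auditable(record):
    """A label is auditable only if we know its source and the rules behind it."""
    return all(
        isinstance(v, dict) and "source" in v and ("schema" in v or "schema_url" in v)
        for k, v in record.items()
        if k in ("gender", "race")
    )
```

Under this sketch, `is_auditable(bare_record)` is false and `is_auditable(annotated_record)` is true; the point is simply that a label without documented provenance cannot be argued with or refined.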
So that, again, contributes to the problems that we see so often in facial recognition systems and their ability to recognize diverse faces, especially those of people of color and individuals of trans identity.
JS: So, when the facial databases have this information, I assume it’s being collected in multiple ways. One is that I, as the individual, have a picture of myself on Facebook and I can tag myself with gender, race, or whatever options Facebook gives me. Then that informs how an algorithm might assign those characteristics to that image as well; is that correct?
KW: Yes, that’s correct. In this paper, in particular, we were looking at databases that had already been built for public use, for corporate use, and things like that. So, these had already been built; say a lab wanted to build a database to give to people or sell to people, so they put out a call for faces, collected those images, and then categorized them. A lot of these databases were pre-built as a package to be given or shared with the world so others could train their own systems on images.
JS: Interesting! I want to get more into the content of the paper, but I do want to start with the title, because I think the title is actually telling, along with the language in the rest of the paper. The first part of the title is ‘How We’ve Taught Algorithms to See Identity,’ and I was wondering if you could talk about the connotation of that: how algorithms don’t just exist, someone has to build them and train them. Also, how you and your coauthors, and, I guess, the folks in your lab, view how these algorithms are constructed in terms of how they reinforce the stereotypes, discrimination, racism, and prejudice that you’ve already mentioned. The language in the title and throughout the paper is certainly active, but it also takes responsibility for these algorithms, as opposed to just saying, “Yeah, they just kind of exist and off they go.”
KW: Yes. So, I would say the ‘we’ in ‘how we’ve taught’ is maybe the royal we, which accounts for all of us in the field: researchers, practitioners, coders, even the participants who provide the images that go into these databases. We’re all responsible for teaching AI and machine learning systems how to do the jobs that we’re asking them to complete. So, it’s up to us to do a better job of ensuring that those systems are fair and equitable for all races, cultures, and gender identities. These systems are really no smarter than a toddler, essentially, and will never do anything more than what they’re told with the information that they’re given. So, when it comes to machine learning, well, that machine needs to be taught. Like I said, it’s up to us as those teachers to give those algorithms their best shot at being as accurate, equitable, fair, and representative as possible of all of the people they’re trying to assess.
JS: There’s a lot of talk in the data and data visualization field about being responsible consumers and users of data. I’m curious whether you’ve thought about this from the consumer side, which is sort of a weird thing here. I think about a lot of the media we talked about earlier: my ads on Gmail and Google are being targeted toward me. What can we do to try to be responsible consumers of this information? Maybe that’s the easiest way to say it.
KW: So, for every person who’s using a computer or things like that, it depends on whether you want to be identified or not. There are two different ways to look at this: it depends on whether you want to be identified and, if you do, on ensuring that it’s accurate. From what I have gathered generally in my research, a lot of people are very uncomfortable with being identified, especially in marginalized communities, protected classes, and other groups that may be at risk of surveillance.
So, a lot of those people spend time obfuscating their identity. They would rather not be found in a system whatsoever, and if they are found, they don’t want that information to be accurate. Now, a lot of people don’t know that there’s a backend feature in Google, and I think you can do it in Facebook as well, where you can go in and see what they think of you for marketing and ads. You can see what they think your political leaning is and what they have assessed your race to be, even if you’ve never entered that information. You can go look this up, and a lot of people would rather that information be inaccurate because they don’t want ads targeted to them.
They don’t want an online system that can make determinations about credit scores or things like that to find that information about them. But if you do want to be found online, which is completely reasonable too, or if you do want to leave this digital footprint, the most you can do as a consumer is ensure that it’s accurate. There are ways you can go into your own profiles and edit your information to make sure that it is as in line as possible with who you are as a physical person. But also, go into the backend of Google, see who they think you are, and then you can either change the settings in there manually, or you can change your user behavior to be more in line with who you are as a person. It all just depends on how much you want to be involved in this digital space, and that’s up to every individual to determine for themselves.
JS: Right. Before I ask a little bit more about the paper, I wanted to turn back to something you just mentioned, because I want to make it clear for folks who may not be thinking about this that what we’re talking about here is not just scrolling through your newsfeed and seeing an ad for the thing you just looked at on Amazon. It’s not just about advertising. You just mentioned credit scores, which I think is a great example, along with health and housing. I was just wondering if you could talk about a few of the areas where these algorithms can reinforce the stereotypes and discrimination that you’ve been talking about.
KW: Absolutely! So, I’ll talk about credit scores, and I’ll talk about insurance and loan determinations. There are a lot of different data points that are used, like we’ve been talking about, and they’re all proxies of who you are as a person. They don’t know exactly who you are, so they have to use other data points to make assumptions. Again, that’s a problem; anytime we make assumptions, that’s not a good thing. But take zip code. Zip code is one that is used really widely to make determinations about who you are as a person, and this goes into bias. This is how we have gerrymandering. We have a lot of lines that are drawn that separate people from different resources, school systems, and hospitals, and those lines are also put into these algorithms that determine whether you are deemed worthy of receiving something like a loan, or a certain type of health care, or health care at a certain rate.
So, these things are determined by where they think you live and what type of neighborhood that is. Any person can decide to buy a house in any neighborhood, but they make these assumptions based off of what they typically and historically have seen of those neighborhoods, whether they see one as more disadvantaged or as very wealthy. They use something as simple as your zip code to make determinations about how worthy they deem you to be.
When it comes to things like health care, there are different issues that come into play with the reporting that doctors have done. In the physical world, there’s been a lot of discrimination, with people not recognizing that symptoms, say of a heart attack, may present differently in African-Americans or black people than in white people. Those same biases that were written into charts get input into these algorithms. So, there are misdiagnoses that happen even with the assistance of AI, based on there not being fair reporting on what these symptoms look like and who deserves treatment for them or not.
Then, like I was saying, you have finances that are being tracked as you make your purchases online. It’s not just about the ads that you see; it’s about these algorithms also being able to see what you’re buying, when you’re buying it, how much money you have in your bank account, and how much credit you have. With all of those things put together, it’s making a profile of who you are as a buyer or as a consumer. So, that’s also leading into determinations about what you may be deemed worthy to receive, whether that’s the ads you see or how your requests for loans or purchases are treated.
So, those are all things that are being tracked in systems all the time. They’re not just trying to target you to sell you things. They’re also trying to see what you’re buying, and where, to make determinations about who you are as a person. Are you shopping at Wal-Mart, are you shopping at Nordstrom, are you shopping at Barneys New York? Those are all very different things.
JS: Right. Yeah, I think Virginia Eubanks has this great book on this topic, and I think she has described this as the virtual redlining of our society, where we’ve moved from those physical maps of housing discrimination into the virtual world, which, as you mentioned, works a lot faster and a lot more broadly, because computers are doing it now, informed by the decisions that people make when they build the algorithms.
JS: I wanted to ask one last question about the paper. There’s a sentence in the paper that really struck me, because I focus a lot on good annotation in data visualization, and I thought this was really interesting. In the paper, you and your coauthors say, “…further when they are all annotated with race and gender information, database authors rarely describe the process of annotation.” I was hoping you could talk a little bit about what annotation means in the context of your field and your research.
KW: Absolutely! So, in this particular study, we’re looking at images, right? When it comes to writing a description of an image of a person and giving that description to a computer, those algorithms, at least in the systems as they’re built at this moment, need markers like race and gender to be able to sort, categorize, and match similar images.
Well, those descriptions are written by the people who are developing those databases of images, and we often found that this process was done in a very vague but determinative way. When the individuals who collected these images saw what they presumed to be a black person, they labeled the image black; when they saw someone they thought to be white, Asian, Indian, male, or female, they made these determinations for the subject in the image, typically with no clear distinctions or justifications for why and how these assignments were made, aside from, ‘well, we just did it.’ Those distinctions and justifications would be the annotations.
So, it would be some sort of previously defined set of rules or guidelines that would inform exactly why the images were labeled as they were. There would need to be guidelines for what is visually defined as a woman, what is visually defined as a man, what is determined to be black or white, and so on. But without these clearly defined rules and guidelines, which could then be argued, disputed, iterated on, or improved upon, we’re just left with determinations on images of individuals that may not be accurate or true to what they would see as their own identity. Then there’s no way to argue with those determinations or refine them to be better and more accurate for these people.
So, it’s basically just that someone said so, and that’s not a real annotation or justification. It’s just, ‘because I said so.’ One of the main issues with databases attempting to determine identity is that the identity of the subject is often reported for the subject, instead of allowing the subject to self-identify. Then, again, those in charge of creating the databases have essentially defined race and gender for an entire set of individuals without considering their lived, embodied experiences or their positionality.
So, in the paper, we talk about this as the visible versus invisible features of identity. Think about it: if you don’t have a diverse group of people making these determinations, and they are only using categorizations that they defined and do not explain, it’s really easy to see how this leads to contextual collapse. It removes a lot of potential variance and diversity that we could have in these databases. Not to mention that, again, the individuals who are typically pulled for these images are not representative of the many diverse populations we have in the world, and they often skew toward being white or white-appearing people.
So, if we take all of that into account, and then we teach it to a system and tell it to read a face, and let’s say it’s a black, Native American, trans person’s face, the system already cannot recognize them as a trans person, because it is only capable of reading gender in the binary, male/female. It also has trouble reading a black face due to a lack of training images in the database, and it’s lost in finding distinctions between black and Native American because it was never told that Native American was something to look for.
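That failure mode follows directly from how these classifiers are built: a model can only ever output labels that existed in its training data. The sketch below is a toy illustration with made-up confidence scores, not any real facial-analysis system; the label set and numbers are purely hypothetical.

```python
# Toy closed-set classifier: it scores each *known* label and returns the
# best one. Labels absent from training simply cannot be predicted.
TRAINED_LABELS = ["white", "black", "asian", "indian"]  # what the database defined

def classify(scores):
    """scores: dict mapping each trained label to a model confidence."""
    # argmax over the closed label set; there is no "none of the above" option
    return max(TRAINED_LABELS, key=lambda label: scores.get(label, 0.0))

# Imagine a Native American subject. The true category was never in the
# label set, so the model is forced to pick the closest-scoring trained
# label and returns a wrong answer with apparent confidence.
scores_for_subject = {"white": 0.21, "black": 0.34, "asian": 0.25, "indian": 0.20}
prediction = classify(scores_for_subject)  # "black": wrong, but unavoidable
```

The same logic applies to a gender field restricted to male/female: the system isn’t failing at its own task, it is faithfully executing a label set that was too narrow to begin with.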
So, now we have a system that can’t identify a person it’s attempting to read, and if it does, it will output incorrect information, just doing the best with what it’s been told. This leads to many issues in facial recognition that have really serious implications. Right now, we have a lot of misidentification issues with things like traffic cams and street surveillance, especially during the protests. There are a lot of issues with identity verification, especially for diverse people, when it comes to passports and IDs at airports. They use a lot of facial recognition for that, and it slows down the process of diverse groups of people even getting through security to get to another country, with systems just not identifying diverse subjects at all and giving back error responses.
So, there’s an entire world of issues that plays into this, but that’s essentially what we’re looking at in the paper, and that’s where we were looking at annotation, with the direction [inaudible 00:29:00] requirement there.
JS: Really interesting. Well, I’ll post the paper or a link to the paper on the show notes. Before we go, I wanted to ask what work you have on the horizon. What’s the future bring for you in terms of your work and your dissertation and the lab?
KW: Oh, thank you for asking. Yeah, so I’m doing some work right now looking at how qualitative researchers conduct their work and their analysis, trying to find ways to better understand that process and whether we can potentially build tools to help them do these things. I’m also looking at where in the country we see the greatest lack of data literacy, and how we can help those communities inform and educate themselves on how to be smart consumers of data, like we were just talking about.
I am also looking, right now, at the things that are going on with the protests and protest surveillance, and, moving forward, at a dissertation or work later down the road. Like I said, I have a background in media and entertainment, and I really have a love for theater arts, because I feel that it can connect with people in a way that a lot of other things can’t, and entertainment is extremely powerful in being able to disseminate messages.
So, data literacy is a huge subject to try to teach people, and I do see, down the road, that I’d like to incorporate data literacy into messages of passive learning via entertainment and theater arts. It’s a bigger goal, but I’ll refine it as I go; those are really what’s on the horizon for me.
JS: That’s great. That sounds like great stuff and sorely needed and I think you have a lot of work to do.
KW: Well, thank you.
JS: It sounds like great work.
KW: Thank you so much.
JS: Kandrea, thanks for coming on the show. This was really interesting. It was great to chat with you and I really appreciate it.
KW: Absolutely! Thank you for having me.
Thanks to everyone for tuning in to this week’s episode of the PolicyViz podcast. I hope you enjoyed that interview with Kandrea Wade. I hope you’ll check out the various links and resources that I have put up on the show notes page. You can go check out Kandrea’s bio, you can check out her research and her work over at her lab at CU Boulder, and you can check out the various talks that I’ll be giving over the next couple of weeks. So, until next time, this has been the PolicyViz podcast. Thanks so much for listening.
A number of people go into helping making the PolicyViz podcast what it is. Music is provided by The NRIs, audio editing is provided by Ken Skaggs, transcription services are provided by Pranesh Mehta, and the show and website are hosted on WP Engine. If you’d like to support the podcast and PolicyViz, please head over to our Patreon page where for just a couple of dollars a month, you can help support all of the elements that are needed to bring this podcast to you.