Niklas Elmqvist is a full professor in the iSchool at the University of Maryland, College Park, where he directs the Human-Computer Interaction Laboratory (HCIL). His research areas are information visualization, human-computer interaction, and visual analytics. He was elevated to the rank of ACM Distinguished Scientist in 2018, one of only 40 people receiving that recognition that year.
In this week’s episode of the podcast, Niklas and I talk about his research on accessibility and animation, as well as a whole slew of other topics.
My new book, Better Data Visualizations: A Guide for Scholars, Researchers, and Wonks, is now available for pre-order. This book has been a couple of years in the making, and I’m very excited to have it come out later this year. Check it out on Amazon or your favorite bookseller.
Support the Show
This show is completely listener-supported. There are no ads on the show notes page or in the audio. If you would like to financially support the show, please check out my Patreon page, where just for a few bucks a month, you can get a sneak peek at guests, grab stickers, or even a podcast mug. Your support helps me cover audio editing services, transcription services, and more. You can also support the show by sharing it with others and reviewing it on iTunes or your favorite podcast provider.
Welcome back to the PolicyViz Podcast. I’m your host, Jon Schwabish. I hope you and your friends and your family are all well, healthy, and safe in these strange times, and I appreciate you coming back and listening to the podcast. On this week’s episode, I’m very excited to have Niklas Elmqvist, who’s a full professor at the College of Information Studies at the University of Maryland in College Park, Maryland, which is just a short drive away from my home here in Northern Virginia. I reached out to Niklas because he does some really great work at the University of Maryland, and in particular, we talk about his work on accessibility, which, as you probably know, has been the subject of a couple of different interviews and episodes over the last few months, and we also talk about his work on animation in data visualization. So we sit down and talk about all these different issues, and it’s a great conversation that I hope you will enjoy.
Before I get to that, I just wanted to let you know that my new book Better Data Visualizations is finally out for pre-order on Amazon. The book is hopefully going to come out this fall, it’s all done, we’re in the last stages of finishing the proofs and getting everything laid out, so I’m very excited for this book. It’s a book that I had put off writing for many years and finally sat down and was able to pull this thing together, and I’m really excited about what I was able to put into it and the ground I was able to cover in just one book. So I’m very excited for that. If you’re interested, please do head over to Amazon and take a look at it and maybe submit your pre-order. I put the link in the show notes. So again, I hope everyone’s well in these strange times of the COVID pandemic, and I hope you are staying safe and staying healthy and staying inside and taking care of yourselves and friends and neighbors and loved ones. So on to this week’s interview with Niklas Elmqvist I hope you will enjoy it, and here we go.
Jon Schwabish: Hi Niklas, how are you, how are you doing?
Niklas Elmqvist: Pretty good. How are you?
JS: I’m doing fine. A little bit of a rainy day today, but that’s okay; as I sit here working, I just get to look beyond my computer out into the world. Well, thanks for taking time out of your day and coming on the show. There are a couple of strands of your research that I’m interested in focusing on and chatting about, so maybe, for folks who aren’t familiar with your work, you can talk about your background and the lab over there at the University of Maryland, and then I can pester you with questions.
NE: Sure. I am Niklas Elmqvist. I’m a professor at the University of Maryland in the iSchool, the College of Information Studies, and also the Director of the Human-Computer Interaction Laboratory, the HCIL, which is actually the oldest HCI lab, I’m told, in North America. We were founded in 1983 by Ben Shneiderman, and I’m the sixth or seventh director, depending on how you count. Just like Ben, who founded the lab, I’m a data visualization researcher, and I also teach data visualization, data science, and, recently, actually a little bit of game design; I’m a gamer as well.
JS: Oh interesting. Okay, I didn’t know that, so we can talk about that a little bit too.
JS: Do you want to talk a little bit about maybe the history of your research and how the threads have sort of woven back and forth and what you’re working on now?
NE: Sure. Well, I am, for lack of a better word, a data visualization generalist, which means I work across many different things. It might mean I just have a hard time focusing on one thing. Another way to put it is that I’m a full-stack data visualization person, maybe that’s nicer, but that means my work ranges from low-level graphical perception stuff, which I’ll talk a little bit about today, all the way to devices and technologies, hardware for data visualization, even straying into pure human-computer interaction research. I’ve done work on pointing and selection, and even electronic readers, and all kinds of things in those fields. So it’s really wide, it’s an interesting mix. I guess I have a hard time nailing down exactly what I’m passionate about; I’m passionate about everything.
JS: Right. Well, there are two particular strands of research that I’m interested in. One is your work on accessibility in data visualization, and the other is your work on animation. Accessibility has been a topic on the show the last few weeks, so I think it would be nice to continue the discussion. The last few folks I’ve spoken with are more on the practitioner side of things, so I’m curious about your research on accessibility, and then we can talk about animation. Maybe just give us a quick roundup of your accessibility work.
NE: So it’s interesting that this is happening, because it seems like a lot of things are coming together. I’ve been listening to those recent episodes on accessibility in this podcast, and I think it’s exciting, and it’s curious how it happens, because these are well-documented situations with a lot of compelling research questions about how we make data universally accessible. But in my case, it was a bit of a journey of discovery, because about a year and a half ago I was told that a student enrolling in my data visualization class at the University of Maryland was basically legally blind. It was very jarring to us because we had not had this situation before. It may seem like a paradox, but thinking about it, even a blind person deals with physical space: they have to navigate 3D worlds, they’re very familiar with shapes just as we are, so it shouldn’t be a surprise, but much of DataViz has basically neglected this population. In our case, we came up with a very low-tech solution to make it possible for this student to take the course, because, of course, he’s not learning just how to read DataViz but also how to create it, if you’re taking a course like mine. So we came up with a solution with just a metal board and magnets, where his assistant would arrange those magnets on the board so they would match what was on the slides, and he could explore and even create his own visualizations.
JS: Well, interesting.
NE: And this has led to a big new research agenda in my lab, with two students working on various things, from physicalization, turning data into physical form so that you can feel it with your hands and fingertips or even your body, to more traditional things like sound, using audio, and even some recent work where we are using smell.
JS: Well, interesting. I love the fact, by the way, that this research track was inspired by a real-life example, a real-life challenge that you had to overcome. I think that’s a great way for research to start, just being able to help a student better understand the topic. But can you talk about this research? Part of what I find interesting about this discussion of accessibility is the difference between static graphs, where most people say, well, let’s use different types of colors, but the big thing is alt text, let’s add that alt text, and the interactive side of data visualization, where I’m not sure people have thought through a lot of the issues. So I’m curious about that.
NE: Absolutely. I think there are many things you touched upon that are relevant to what we do. Overall, the interesting thing, and I know it’s been noted before, is what happens when we make improvements to help this population. And this is not a small population, which is important to keep in mind: there are something like 300 million individuals with visual impairments in the world, and 40 million of them are totally blind; here in the US, it’s somewhere between 7 and 15 million. We’ve been working with the National Federation of the Blind in Baltimore, and it’s exciting to see that many of these improvements, like audio books, came about because blind people wanted to read too, but, of course, now millions of people who are not blind use them all the time.
NE: And that’s true, that’s this curb cut effect that I heard discussed before, where improvements for universal accessibility will help people who are not necessarily in a wheelchair but are temporarily at a disadvantage.
NE: And that’s true here too. I mean, you mentioned alt text, and alt text is great, but the problem is that for a lot of the visualizations on the web, if you do a Google image search for charts, for example, you’ll get lots and lots of images that don’t have alt text. They are just pixel maps of charts, bar charts or pie charts or something, where the pixels themselves don’t carry the data, and a screen reader will not know what to do with those, because there’s no alt text and the screen reader can’t just turn the pixels into natural language, of course. So in a project that we did about two years ago, we used machine learning to translate these images into shapes that we could then recover the data from. So, at the very least, we were able to replace an image of a bar chart or a pie chart or a line chart with the data table from which it was created, because that type of data is usually not available. So that’s a first step. But of course, there should be more ways to do this. You mentioned interaction: in one of the projects that a student of mine is working on right now, he is building a little three-wheeled robot. It looks like a triangle, the size of your palm, and it has a handle that you can put your hand on, and that handle can turn and it can vibrate. The point of this device is that it’s wireless: you put it down on a flat surface, you connect to your phone or your laptop using a Bluetooth or WiFi connection, you activate the device and grab it, and it will start moving to describe a shape. You can feel, for example, I don’t know, the stock market value of Google over time, or the temperature around the world changing with seasonality, or something. And since the motors on this robot are back-drivable, we can also use it for interaction: you can basically use the device to explore that space as well.
JS: I mean, obviously, there’s a long road ahead here, but those sorts of devices, do you see them working differently in a mobile environment than in a desktop environment?
NE: Yeah, so that’s absolutely true. Many of the accessibility devices out there are not necessarily mobile. A refreshable braille display, which can be really costly, turns text into braille characters; there are mobile versions of those, but they’re small, they only show maybe 10 characters, versus the bigger ones, so that’s certainly something you have to keep in mind. This particular robot that we’re working on, we’re trying to make it mobile, so that you can have it in your pocket and put it down on any surface. But beyond that, the thing that has happened, and that we’ve also been looking into, is screen readers: with smartphones being so ubiquitous, they have really revolutionized information access for blind people, who use screen readers on their smartphones all the time.
JS: Right, because we’re on it all the time. Have you thought about or done any work on other types of impairments? I think we would probably both agree that in the DataViz and InfoViz community, the first thing we think about, or at least the first thing most people probably think about, is vision impairments or difficulties. But there are all sorts of other impairments that limit people’s ability to access information, such as physical disabilities or cognitive disabilities. I don’t know if you’ve thought about these or have started any research on them, but I am curious to hear about the sorts of things you may be thinking about.
NE: Yeah, I mean, it’s true that vision happens to be a particularly big elephant in the room for us DataViz folks, because we rely so much on the magic aspects of vision and we’re singing its praises all the time, but we don’t recognize that not everyone has full use of their vision. In terms of other types of disabilities or impairments, in my group we haven’t looked at anything beyond vision, except, like I said, some of these research approaches are based on sensory substitution, which means you replace vision with another sense. That type of philosophy can, of course, be applied more broadly; let’s say you’re deaf and you don’t have use of your ears, you can use the same general approach of sensory substitution. One approach that might be relevant is the use of smell, which I mentioned early on. It sounds a little like a joke, because interfaces don’t typically smell; it’s a very underused sensory channel. But again, here’s the curb cut effect, because there are situations where you cannot see, you’re blind perhaps, but there are also situations where you cannot look. Let’s say you’re a fully sighted person driving your car; you can’t spend time looking at the screen, and then using sound, or, let’s say, smell in this case, could be useful. And I’m sure that could also be applied, like I said, to a deaf person, or potentially someone with cognitive impairments, because smell is such a primal sense, tied so strongly into memory. In the project that we did, we built three prototypes of olfactory displays, which is the technical name for a smell interface, essentially: just like a screen is a visual display that generates color pixels, an olfactory display generates smells.
So we built several prototypes of these, mobile ones as well as tabletop ones, and the most recent one is a big device. It has 24 bottles of essential oils and little ultrasonic diffusers that we can turn on and off, so that, just like a humidifier at home, we can generate and mix and blend smells and then send them, basically, to the nostrils of the user. We haven’t used them that much as a replacement but more as a complement, but there are certainly situations where you could try to replace vision instead of complementing it.
JS: Right. Wow, that is amazing. I am going to see if I can do a cool segue here. So we’ve talked about accessibility with static visualizations and interactive visualizations, but somewhere in the middle are animated visualizations, and you’ve done some interesting work on animation. Maybe that wasn’t a great segue, I don’t know, but anyway, in the paper that I’ve read, which I will link to in the show notes, what I like about the framing is that you bring it all the way back to Gestalt principles, which I don’t know if a lot of people in the DataViz field think about. Right? We think of maybe pre-attentive principles, or principles for static things, like color or things being grouped together, but there are also principles about animation and motion, and so I like the framing of that paper. So I was hoping you could explain that framing and then talk about the actual research.
NE: Yeah, it was interesting to take a little bit of a journey back in time when we wrote this paper, because graphical perception in general, as I mentioned earlier, is this interesting intersection of vision science and perceptual psychology on the one hand and data visualization on the other hand, where we’re trying to figure out how we can build visualizations that match how our visual system works and how we can make them better. And a lot of the seminal work in graphical perception, as you know, is relatively recent. I mean, it’s people like Jacques Bertin, the cartographer, some time ago, but still not that long ago; and then Bill Cleveland and Robert McGill, who did work on graphical perception for statistical visualizations. But if you rewind a little further, you’ll see that a lot of the early work on visual perception was done in the early 1900s by what was called the Berlin School of experimental psychology in Germany. They eventually came up with this notion of Gestalt psychology, which is a theory of mind about how the whole is bigger than the sum of its parts and how that works in terms of the things we see, and essentially they came up with maybe something like five or six so-called Gestalt laws that say how we humans group the elements we see in our field of view into whole components. So things like proximity: when we have several things close to each other, we tend to think of them as a group, and that’s commonly seen; you have a scatterplot of dots, and if they’re grouped together, you think, oh, those are a certain cluster of behavior. And then you have things like similarity, where two objects that have a similar visual appearance, the same color or the same shape, you tend to group together. And then there are additional ones. The one we were interested in in this particular study was the law of common fate, which is the only Gestalt law that deals with things changing over time.
So basically it says that if several elements are behaving in a similar way, we humans tend to think of them as having the same, or common, fate, and then we group them together. Commonly, we use this idea for animation, so elements that move in the exact same direction and at the same speed tend to be grouped. But what we looked at in this particular study was whether this idea of changing together applied not just to animation, or at least not just to motion, elements moving in the same direction and speed, but also to dynamic behavior, like whether elements grew together or shrank together, or whether they changed color together.
JS: So as we are looking at an animated bar chart, for example, let’s take the bar chart race…
NE: Right, yeah.
JS: Yeah, so as we view those, we see them moving together as a singular group.
NE: Yeah, well, they have to move together at the same rate. So if you have a big bar chart and you have two bars, even if they’re not next to each other, if they have the same behavior, if they grow at about the same clip, then the notion is that you think of them as one group that moves together. Another example is Hans Rosling’s animated scatterplots in Gapminder, where he talks about how the demographics of countries move as they increase in economic status. And if you squint, even though those types of visualizations are really confusing, you know, hundreds of dots moving together, you tend to see trends: a cluster of these countries, even if they’re a little scattered, happen to have the same economic growth, so they tend to stick together and become more of a unit in your mind’s eye.
JS: Right. So let me ask you this question, because there are probably people listening to this discussion and thinking, oh, that’s interesting, and I can see how that’s true when I look at the Rosling bubble plots; I kind of see this cluster of dots moving. But I wonder if people are also thinking, what does this mean for me as a creator of visualizations? So if someone were to say to you, okay, I’m an interactive DataViz developer, how do I take the findings from your paper and apply them to my work?
NE: Yeah, it’s a great question, because our study is relatively basic, so there’s clearly some explanation needed to say how you apply it in practice. I think the basic finding from our study was confirming something that many of us already knew (sometimes that’s what research becomes, and in this case it is): that animation is extremely powerful. When we had elements moving in our study, that overshadowed everything; elements that move together are much more tightly visually grouped than elements that change size or change color or even are close together in space. So animation is maybe the strongest visual cue you can use in a data visualization, which means that you need to use it with caution. I mean, I know that’s what Uncle Ben says, with great power comes great responsibility, so be careful with your data visualizations whenever you use animation, because it’s going to grab people’s eyes, that’s the clear thing.
JS: Yeah, that’s really important. And just as a parenthetical, when you said that’s what Uncle Ben says, my actual first thought was not Spiderman; my first thought was, that’s what you guys call Ben Shneiderman, but maybe it’s a [inaudible 00:22:57].
NE: Yeah. You think Shneiderman, that’s true.
JS: Okay, so this is great, because I think when I talk to students about static visualizations and ask them to identify the things that draw their attention, the big one that stands out is color. But when we move from a static visualization to an interactive or animated world, we may need to think about how motion may supersede some of these other characteristics that, in a static world, are the things that pop out to us.
NE: Yes, absolutely. The other finding was that we were able to confirm the original formulation of this law of common fate, which basically said, in a much more generous way, that anything that changes together will be perceived as having a common fate, whereas in data visualization practice, many of us have taken this to mean that the law of common fate applies when things are animated and move together. But our study was able to show that it’s actually not just animation, when things move together at the same speed and in the same direction, but also when they change color together or when they change size together. The grouping strength was strong, not as strong as moving together, but still strong compared to all of the other grouping variables. So that means that if you wanted to use animation in a data visualization to some degree, but not as strongly as having elements move together, you could have them change color together or change size together. That would be another cue you could use, so we’re adding to the arsenal of data visualization designers.
JS: Right. Great. So one last thing is on the Maryland human-computer interaction lab, you have a symposium coming up and I wanted to give you a couple of minutes to talk about that and where people can find speakers and more information about it, because I’ve attended in the past and it’s a really great event.
NE: Yes, we’ve done this for 36 years, so it’s [inaudible 00:25:09] since the beginning of the lab; we’re longer running than many data visualization conferences in the field. So yes, our annual symposium is happening at the end of May. Right now, we’re trying to figure out exactly how to proceed with it. It’s probably going to be pre-recorded talks with a bunch of popular-science-style blogposts about the research. All of it is going to be about work that has happened in the last year in the lab, and a lot of it is students presenting. I’ll be giving a keynote about data visualization for the blind, so a lot of what you heard today will be expanded upon and explained, but there will also be interesting talks given by my fellow faculty members and students. So it’s a full-day thing. Usually, as I said, we meet in person; of course, now we are rethinking and reorganizing this, but still, I encourage you to keep an eye out. All of the information is available on the HCIL website, at hcil.umd.edu.
JS: Great, yeah, I will post the link to that so people can take a look. It is a great event, and with pre-recorded lectures and blogposts, maybe there can be a different type of communication between the speakers and the audience, so that’s great. Well, Niklas, thanks for coming on the podcast; it’s been great chatting with you. I love this work, and I look forward to, at the very least, watching your keynote address, because I’m interested to see what other thoughts and research you guys are working on. So thanks for coming on the show, it’s been great.
NE: Absolutely. Thank you.
And thanks everyone for tuning in to this week’s episode, I hope you enjoyed it, I hope you learned a little bit about how to make your data visualization accessible and how to think about animating your data visualizations or at least reading animated data visualizations when you see them out there in the wild. So again, stay safe, stay healthy, and until next time, this has been the PolicyViz Podcast. Thanks so much for listening.