Welcome back to the podcast! We turn to 3D data visualizations in this week’s episode! I talk with Dr. Tyler Morgan-Wall, who is the developer of the mapping and data visualization package rayshader along with the raytracing package rayrender and several other R packages (the rayverse). He is a passionate advocate for 3D data visualization, open source software, and reproducible workflows in dataviz and data science. He has a PhD in Physics from Johns Hopkins University and works as a researcher at the Institute for Defense Analyses in Washington DC.
We talk about Tyler’s background in physics, his work developing the rayshader package, and his run-in with Edward Tufte.
Support the Show
This show is completely listener-supported. There are no ads on the show notes page or in the audio. If you would like to financially support the show, please check out my Patreon page, where for just a few bucks a month you can get a sneak peek at upcoming guests, grab stickers, or even a podcast mug. Patrons also have the opportunity to submit their own questions to guests. You can also send a one-time donation through PayPal. Your support helps me cover audio editing services, transcription services, and more. You can also support the show by sharing it with others and reviewing it on iTunes or your favorite podcast provider.
Welcome back to the PolicyViz podcast. I am your host, Jon Schwabish. Now if you’re anything like me, you’re probably not a big fan of using 3D effects in your data visualizations. Whenever you see that 3D exploding pie chart, everybody makes fun of it, it gets a lot of critique on social media, but there are times when 3D can be useful. And so, on today’s episode of the podcast, I am excited to have Tyler Morgan-Wall as my guest to talk about his use of 3D and animated 3D in his data visualization efforts. Tyler created the rayshader package in R, and he uses that package to create what I think are some pretty astounding visualizations. Tyler also has a background in physics, so he brings that background to the development of the package and to his work more generally. We talk about his work, we talk about his background, and we also talk about a run-in he had with Edward Tufte, who made fun of one of his recent data visualizations. So check out this week’s episode of the show; I think you’re going to learn a lot. If you’re an R programmer, I hope you’ll go check out the rayshader and rayverse packages, so you can create your own 3D in useful and helpful ways, and help people better understand your data. Here is my conversation with Tyler Morgan-Wall on this week’s episode of the PolicyViz podcast.
Jon Schwabish: Hey Tyler, how are you? Welcome to the PolicyViz podcast.
Tyler Morgan-Wall: Great, thanks for inviting me.
JS: Very excited to have you on the show, very excited to be doing a nighttime podcast, not one of my regular things. So I’ve got my glass of whiskey here; for those listening, you can hear the ice. If you’re Steve Wexler, you’re mad that I have ice in my whiskey, but I like my whiskey cold, so that’s what we’re going to do tonight. Tyler, I kind of stumbled upon your work a couple of months ago from this explosion: you had this really cool visualization that Edward Tufte pounced on, and we’ll talk about that in a little bit, but I don’t want to spend too much time talking about the negative; I want to talk about your amazing work. And usually, for podcasts, I’ll sort of introduce people and then just go to the tape, but I actually feel like you have a really interesting background, so maybe you could talk a little bit about that. And then, because I know you have this background in physics, maybe just segue into the DataViz work that you’ve been doing, and how those two sort of merge together, because it’s a really interesting combination of skills.
TM: Yep. So I got my PhD in physics from Johns Hopkins in condensed matter, basically superconducting quantum nano devices. So experimental, a lot of lab work; I really enjoyed working in the lab, but like many people in physics, I ended up pivoting to data science work afterwards, the, quote-unquote, traditional path, where you do a lot of schoolwork that’s kind of orthogonal to the data science you end up doing. In doing that I got a lot of analytical training, and I ended up working at a place in DC called the Institute for Defense Analyses, where we do a lot of data science and analytical work for the government, and part of that was I was surrounded by statisticians. And when you’re surrounded by statisticians, it’s inevitable that you end up learning R. I didn’t really have a background in using R; I’d used some Python in my graduate studies, but I started using R. And from there, I started building some packages, and I found out this is really great: it’s really easy to take a package and produce it so other people can have reproducible workflows and use your work a lot more easily than sharing R or Python scripts. And at one point, I had just come back from the RStudio conference, where I’d presented on a kind of dry statistical package that I had been developing for my work, and I just had the desire to do something more fun. I’ve always been interested in mapping, so I decided to try to produce a package that would create maps in R, and that is how I started writing the Rayshader package.
JS: So that’s really interesting that you went to maps right away. Was there a reason? Like, you have this incredible background, but it’s not in cartography.
JS: Was it maps, just like, oh, maps are great, maps are cool, everybody loves maps?
TM: Yeah, a part of it was just that, out of all the areas in data visualization, I had personally just always enjoyed a good map. But really, it was more that I wanted to learn more about cartography, and I had a specific sense of what I thought a programmatic interface for making maps should be: basically building up maps in layers, where all of the map’s aspects really come from the elevation data. Part of my desire to write Rayshader was that I originally looked around for something like what I had in my mind. There are a lot of tools like QGIS, and a lot of GIS software, which are really complex, powerful tools, but not really focused on a programmatic interface. And I really like that kind of reproducible workflow when working with DataViz; I think that’s a really important thing to have. If tools support a reproducible workflow, I think they eventually become higher quality, in that you’re less likely to make mistakes along the way, which ensures a better end product. It also ends up being, I think, a lot less work for the practitioner: if you have a tool where you can just change the data source and get an identical visualization, but for a completely different area, that’s powerful, and there was nothing really like that for maps that I found. So I was like, I’m going to start building something. The second part of Rayshader was that I’d never really found a tool that was focused on making beautiful maps. I’m not saying you couldn’t make beautiful maps in something like QGIS, but the focus there is more on the technical cartography aspect.
JS: Yeah, right, I got you, yeah.
TM: So there was nothing really out there focused on that, and I thought, hey, path tracing or ray tracing, I’ve seen lots of really beautiful stuff made with that. So maybe if I combined cartography with ray tracing, I would get something really cool out of it, and that was my thinking going into it. And that’s where the genesis of the name Rayshader was: it was ray tracing plus hill shading. It ended up being a real boon to Rayshader that rayshader.com was available.
JS: Right. You’re like the Kleenex of the mapping world.
TM: Yeah, when I saw the first person use Rayshader to refer to a map that wasn’t made with Rayshader, like, oh, I’m going to ray-shade this, with some other program, I’m like, oh wow, I’m the Kleenex of ray-traced maps. So yeah, it was a really good name. Sometimes you just hit on a really good name and you don’t really realize it, and that can be a real boon, especially early on, because I think people just kind of knew right away, like, oh, this makes maps with some form of ray tracing. And having a name that tells you what the program does just really helps from the marketing aspect. I’ve found that’s a big part of a successful package for DataViz.
JS: Yeah. So I want to come back to this idea of building packages. It is interesting to me that you take this approach of, I’m going to build a package that people can install and use. It sounds like, at least, and correct me if I’m wrong, that you start on the side of packages, as opposed to building out a set of scripts publicly, getting to a point where people say, this would be really great if it was a package, and then building the package. It sounds like you take what I sort of view, and I could be wrong, as the opposite approach, or the opposite order of steps.
TM: Right. Well, that I think is the great thing about the R ecosystem: unlike a lot of programming languages, where sharing scripts really is the bread and butter, R has this packaging mechanism built in. A package really is just shared scripts, but in a better and more standardized way, with lots of checks from CRAN and built-in documentation that make things much easier for the end user. Because I’d done this work building packages at my job, I had gotten over that hump of learning how to transition from scripts to packages. And it really isn’t that much of a jump from writing scripts, so to me the first step is, hey, if I’m going to share this with anyone else, I’m going to make it into a package. Because then, one, it’s a lot easier to install; two, it gives it an API, and people know how to call the documentation and help files. I get this great environment of building pkgdown websites for documentation. And I can run a CRAN check, the checks that the CRAN repository uses to ensure software quality across multiple systems, and know that it runs on Mac, Windows, and Linux. There are just a lot of benefits you get from packaging that you don’t necessarily get from scripts.
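None of this tooling comes up by name in the episode, but the script-to-package jump Tyler describes is largely automated in modern R. A hedged sketch of the usethis/devtools workflow; the package name and function name below are hypothetical placeholders, not anything from the conversation:

```r
# Sketch of turning a script into a package with usethis/devtools.
# "mymappkg" and "shade_map" are placeholder names for illustration.
library(usethis)
library(devtools)

create_package("~/dev/mymappkg")  # skeleton: DESCRIPTION, NAMESPACE, R/
use_r("shade_map")                # creates R/shade_map.R for your function
use_mit_license()                 # add a license file
document()                        # roxygen2 comments become help files
check()                           # the CRAN-style checks Tyler mentions
```

`check()` runs the same battery of checks CRAN itself uses, which is what gives package authors confidence the code behaves consistently across Mac, Windows, and Linux.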
JS: If you had your way, if you were the R chancellor, or whatever we would call it, the God of R…
TM: Benevolent dictator.
JS: The benevolent dictator of R. Would you have everyone write packages instead of sharing scripts, or do you think it works the way it is, with some people just sharing scripts? I mean, you sort of blew by the fact that you have all this documentation, you have help files, you have all these things; that’s not easy to do, it takes a lot of work and time and effort, and you’re providing that to the community. So if you had your druthers, would you say every script people put out publicly should be a package, given, of course, that they would also have to write all that other stuff?
TM: Yeah, no, I would say, I mean, so there’s kind of like a certain level of how much do I intend for other people to look at this as kind of learning material.
JS: Right, okay, yeah.
TM: And how much do I think of it as a tool. So if I look at it as a tool to create some sort of standardized output, like Rayshader produces 3D maps, or rayrender renders 3D scenes, it’s a question of both complexity and intention. Whereas if I’m just sharing a script to show, let’s say, how I made an individual DataViz, like, this is how I made this specific map, you obviously don’t need a package for that. That’s the perfect opportunity for sharing a script. But if I mean for it to be a tool for other people to use, then yeah, I think packages are the best way to deliver that to other people [inaudible 00:12:15].
JS: Yeah, that’s great. So let’s go to these two concepts, really, of 3D and animation. So 3D, I think, oftentimes gets a raw deal in DataViz, but usually appropriately so; it’s that, you know, Excel 3D cones garbage thing. So I just want to sort of open it up: your thoughts on 3D, and your perspective on when it’s useful.
TM: So I think 3D kind of suffered from a chicken-and-egg problem, because early on, in the early 90s, we didn’t have the computing power to produce real 3D visualizations. So a lot of the tools developed early on, like Excel, had these sort of fake 3D visualizations, these 3D bar charts or 3D pie charts. They weren’t real 3D in the sense that the data wasn’t 3D; it was window dressing. And I think from years of having only that capability, this meme of 3D-charts-are-bad kind of permeated DataViz, because early on the tools only supported bad 3D DataViz. From that point, toolmakers weren’t going to put in the effort to support nice data visualizations that are 3D. And if you don’t have anyone pushing the tools, then all you have are these poor 3D data visualizations. For many years, the major tools didn’t really support 3D in any good way. Then I think in the mid-2000s, individual, very talented people started using 3D modeling tools and renderers like Blender to create really striking visualizations. One of the first people to do 3D maps, I believe, was Scott Reinhard of The New York Times, and he produced some really gorgeous maps that way. But each one is kind of artisanal; the tool is not designed for that. It’s a renderer, designed for people doing CGI, moviemaking, 3D modeling. It is not a tool meant for data visualization, and it’s not built into an ecosystem that supports data very well. So for a while we had this artisanal period of people being able to build 3D visualizations, but using tools that weren’t really meant for that.
And I came in and really wanted to create tools that would natively, within R, support producing the sort of visualizations you previously only saw come out of advanced 3D rendering software like Blender, with only a couple of lines of code. That was my goal. Originally, it was just to create really cool-looking 2D maps. But there was a point where I suddenly thought, hey, this would be really easy to make 3D. So I sat down, took one of these 2D maps I had made with ray tracing and these color hillshading algorithms, extruded it to 3D from the 2D map, and I remember just looking at it being like, oh my God, this is amazing; I’ve literally never seen anything like this before. And it wasn’t that complex at the time; I was just mapping a texture onto a 3D surface. But I realized this was something I had only really seen in bespoke hand-drawn diagrams from the USGS. It’s actually a very common aesthetic; a lot of geospatial work uses these slices through the Earth. But no tools really supported that, so a lot of these things were handmade in Illustrator or hand-drawn. And once I created a tool that could make these directly from the data, I thought it was really cool. Then, after releasing it to the public in package form, a lot of other people started using it and making some really cool stuff with it as well.
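The 2D-to-3D jump Tyler describes is, in current versions of rayshader, roughly a one-function change. A minimal sketch using the package’s built-in `montereybay` elevation matrix; the argument values here are illustrative, and exact defaults may vary by version:

```r
library(rayshader)

# Shade the built-in Monterey Bay elevation matrix: color by surface
# orientation, then darken with a ray-traced shadow layer.
shaded <- montereybay %>%
  sphere_shade(texture = "desert") %>%
  add_shadow(ray_shade(montereybay, zscale = 50), max_darken = 0.5)

plot_map(shaded)  # the 2D hillshaded map

# The same shaded texture, extruded into a 3D surface.
plot_3d(shaded, montereybay, zscale = 50, theta = -45, phi = 30)
render_snapshot()
```

The same `shaded` texture feeds both `plot_map()` and `plot_3d()`, which is the "extrude the 2D map" moment described above.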
JS: Yeah. Let me just give folks a sense. Can you give me a ballpark of how much computing power and how long it takes to go from this 2D map to building it up in Rayshader? When you click run, I think there are probably people listening to this wondering, can my Mac handle something like this, building this spinning globe with all this stuff? But I’ve seen it work, so I’m sure that’s not a problem. Can you give folks an idea of what they can expect when they dive into the package?
TM: Right. So there are actually two different packages, and multiple rendering methods, and that’s been half the difficulty in developing these packages. I actually have to read a lot of computer graphics literature to figure out how to write the software. I’m doing kind of double duty with a lot of this stuff, because not only am I trying to proselytize for why 3D data visualization is good, but I also have to write the tools at the same time.
TM: So it’s having to do the technical part of writing the tools, and then also create engaging visualizations, so it’s a lot of balancing back and forth. But I would say the actual computing power isn’t really that high. I think most modern computers have enough computing power to support the basic stuff that Rayshader does, which is creating these basic 3D maps. The hard part is when you use a function called render_highquality(), which calls rayrender, a high-quality path tracing renderer. And that can take a while; that’s one of those things where an individual frame could take 10 minutes to render, because it’s basically simulating how light bounces through the scene. It’s bouncing around the equivalent of photons and then drawing the scene from those. So it’s a very complicated algorithm, and it can take a while. But the normal 3D plots you can make instantaneously. It’s just if you want these really slick-looking 3D ones that…
JS: Yeah, [inaudible 00:18:49]
TM: Yeah, then it can take a while, but that’s computational work. In terms of lines of code, we’re still only talking a dozen or two dozen lines for even the most complicated things. If you look at my GitHub gist page, the visualization that went viral and reached the number one spot at the top of Reddit, the one Tufte tweeted about, that was about 40 lines of code, and it’s not really dense code; you can read through it, and it really isn’t that bad. Computationally, if you were really interested in speeding things up, you could always rent an AWS server. I’ve had people do that; they’ve rented AWS servers with 32 or 64 CPUs and rendered something really quickly. I’ll just render something overnight on my computer. I take eight hours as my allocation: 360 frames at 30 frames per second means I have approximately a minute to spend per frame, so I scale the quality of my render down to hit that timeframe and render it. That’s how I do it.
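Tyler’s overnight budgeting works out like this; a back-of-the-envelope sketch using the numbers from the episode:

```r
# Overnight render budget from the episode: 360 frames of animation,
# played back at 30 frames per second, rendered over 8 hours.
frames   <- 360
fps      <- 30
budget_h <- 8

seconds_per_frame <- budget_h * 3600 / frames  # 80 seconds per frame
video_seconds     <- frames / fps              # a 12-second loop
stopifnot(seconds_per_frame == 80, video_seconds == 12)
```

That is the "approximately a minute" per frame he describes; in practice you would then tune quality settings, for example the `samples` argument to rayshader’s `render_highquality()`, until a test frame fits that budget.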
JS: Yeah. So I do want to get to this Tufte thing, because I think it’s…
JS: An example of a lot of different things. But clearly, your building the package draws on your physics background. Did you get to a point where you thought, well, the physics here is not totally perfect, so I’m going to spend a lot more time figuring it out, or is it good enough? But at your core, as a physicist, were you like, ooh, it hurts me to cut this little corner here? How did you balance that?
TM: Yeah. So the original rendering method I used, which was the out-of-the-box, more traditional CGI approach that wasn’t based on path tracing, definitely had some workarounds, where you kind of hack things to get something looking physically accurate. There was one part where the shadow that was cast was just me basically hacking together a darker version of the background underneath; there wasn’t a real shadow, there weren’t any actual lights going on. And that not looking, quote-unquote, realistic really motivated me to develop the path tracing approach, because there you can get to the point where it looks like the actual sun is shining down. You can actually see where the sun’s position is, the sun’s size; you could make calculations for, say, solar panel coverage based on that. The path tracing stuff is what’s referred to as physically based rendering, PBR. What’s nice about that is it’s a Monte Carlo method that converges on looking physically realistic: you take a high number of samples, and eventually it all integrates out to looking like a photograph. And the nice thing about that for debugging, from a programmer’s point of view, is that if I render something and think, hey, that doesn’t look right, I know there’s a bug somewhere. So it’s actually easy to debug: I’ll take a picture of something, say, hey, that shadow looks wrong, and go in and find that I got some sine or cosine wrong and it’s at the wrong angle. A lot of my early debugging was like, hey, this doesn’t look right; and it didn’t look right because it wasn’t right.
But now the physicist part of me is very satisfied, because physically based rendering really does produce results that are pretty close to reality, and there are lots of complex integrals related to radiance. A lot of the techniques actually came from nuclear physics, and you can go back and read the physics papers; part of having a physics background is being able to interpret all this. So I’ll read these papers to figure out where it’s all coming from and be like, oh okay, this makes sense. And I know physicists, like economists, have this stereotype where you look at somebody else’s field and go, oh, I can do that, I can understand this. I’m lucky in this sense: because it’s physically based rendering, I can actually read the CGI papers and go, oh, this is physics, I’ve got you, even though it has a completely different genesis.
JS: Yeah, we could still talk, yeah.
TM: Yeah, exactly. So yeah, no, that itch is well scratched by this rendering stuff.
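The Monte Carlo convergence Tyler describes, with noise integrating out as the sample count grows, is easy to see outside of rendering. A tiny illustrative sketch, not from the episode: estimating a one-dimensional integral the same way a path tracer averages light samples at a pixel:

```r
# Monte Carlo estimate of the integral of sin(x) over [0, pi].
# The exact answer is 2; the estimate converges as samples increase,
# the same principle a path tracer relies on.
set.seed(1)
x        <- runif(1e6, min = 0, max = pi)  # uniform random samples
estimate <- pi * mean(sin(x))              # (b - a) * mean of f
stopifnot(abs(estimate - 2) < 0.01)
```

The estimator’s error shrinks roughly with the square root of the sample count, which is why doubling render quality costs far more than double the time.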
JS: That’s great. That’s awesome. Okay, so let’s turn to Tufte. I’m going to just read a little bit from my notes here, so people know what we’re talking about. So, in September, you created this really cool animated map; it showed the submarine fiber optic cable network around the world. And just to sort of paint the picture for people, it was this spinning globe, and it had basically lines around it; hopefully, folks sort of get the idea there. And then Tufte put out this tweet, and I’m going to read it for you, and then we don’t have to spend a ton of time bashing him, although I’m happy to do that if we want to spend some time. I’ll link to it in the show notes so people can go look at it. So Tufte tweets: Discovery of backwards Earth rotation => Nobel Prize. Diameter of each optic cable apparently about 100 miles! On land, drop-shadow cable stacks reach far into outer space. Displays of space junk have same problem. Perhaps thinner lines reduce the massive exaggeration. So yeah, I’ll just put that out there, and I guess the first question is, what was your reaction when you saw that tweet?
TM: So, my first reaction was, I’ve made it, Tufte is criticizing my tweet. From that point, I looked at it and just scanned what he wrote, and when I scanned it, I didn’t see what I was afraid of. The only thing I was afraid of was that he would tweet something disparaging 3D visualization in particular, because there are a lot of people who follow Tufte who think 3D is bad, and I just didn’t want to have to fight the battle of, oh God, now I have to somehow convince Tufte, or at least people who follow him, that 3D is fine. In fact, I read it and did not see anything related to 3D in particular, so I breathed a sigh of relief. I was like, okay, he’s having some kind of snark about it, whatever, oh thank God. And then I read it over a couple of times, and I was like, okay, a lot of people say… I had some other data visualizations that had gone viral earlier on, where the earth was rotating the wrong way…
JS: The wrong way, yeah.
TM: Where apparently you’re not allowed to spin a globe in two different ways. You’re contractually obligated to always spin it…
JS: [inaudible 00:25:52].
TM: One way and not the other. I never considered this to be a physically accurate representation of Earth’s orbit around the sun. You might also notice I had no clouds…
JS: Right, that’s [inaudible 00:26:04] that’s right.
TM: I didn’t render the moon going in and out.
JS: That’s right.
TM: There are a lot of things that weren’t physically accurate…
JS: I’m sure NASA is super pissed about the whole thing.
TM: Yeah, I don’t think anyone was looking at this visualization going, you know, I could really understand this submarine cable map, but what planet is this? Obviously not Earth, because it’s rotating in the wrong direction, I’m very confused. A lot of people were posting, like, hey, this might confuse some people, because the Earth’s rotating the wrong way. And I’m just like, who, what are they confused about? Anyway, that’s beside the point. But then I looked at it, and he mentioned the thickness of the cables, which I thought was a little confusing. These actual submarine cables are about a garden hose thick, and I don’t know if you’ve ever tried to see a garden hose from space, but it’s very difficult. I’m sure people would love to have a telescope in space with that resolution. But geospatial mapping is full of abstractions; that’s DataViz. We have lots of abstractions, so it just confused me that he would focus on that. To me, I had a hard time figuring out whether this was a funny snark thing with Tufte, because I don’t know Tufte. He might have just seen this as being funny. But the fact that he used the words drop shadow, I’m like, oh, he’s interpreting this from a 2D perspective, because this isn’t a drop shadow, this is path tracing; that’s an actual shadow.
JS: Shadow, right.
TM: Yeah, it’s the shadow. For me, I could see him interpreting it from the 2D perspective, given that he used that phrase, and given that we have these abstractions in GIS that might not represent the actual size of things. I mean, people don’t complain about maps of the US interstate system not drawing the roads at their exact size. The reality is that the Earth is huge, and humans and human-scale things are very small, so obviously we have to exaggerate stuff. But I would say the big thing that came from Tufte’s tweet, the thing I enjoyed the most, was really the great outpouring from the community of people saying, hey, what are you doing? At the time, I had, I don’t know, 9000 followers, and maybe Tufte looked at that and thought, hey, this guy’s 1/10 of a Tufte, because he had 100,000 followers, so that makes him fair game [inaudible 00:28:26]. But I think a lot of people saw it as punching down. From there, so many people came out. I was working at the time, so I thought, okay, by the end of the day, maybe I’ll respond. But within two hours, anything I could have said, the community had already said: people had come out saying, this tone isn’t great, what are you doing, this is a great visualization. Lots of people shared it saying, wow, I had never known this is what runs the internet, which I think is a sign of a good visualization, people being like, oh my God, I had never known this, and now I realize this is the infrastructure the internet runs on. I didn’t really have any negative responses, other than people echoing Tufte’s same criticisms, saying, I think some people might misunderstand this, and I don’t think anyone really did.
JS: Yeah, I mean, it’s interesting, because, I don’t know what the companies are, but if you’re working for a fiber optic cable network and seeking to lay a new line, then that’s not the visualization you want to use for that particular job.
JS: But if you just want to get this sense of what the world looks like, in this sort of hidden layer that we all just kind of take for granted, that visualization, and I hate to say it, tells a story; I have a problem with that phrase, that’s a whole different conversation, but it does tell that story pretty quickly and intuitively.
TM: Yeah, no, and I think that’s why it hit such a nerve. I’ve had some things go viral on Reddit before; some things will reach the very bottom of the front page, because, I mean, I don’t think most people think about data visualizations too much. But hitting the top spot, opening up the Reddit app and seeing it right at the top, that’s the front page, that’s above the fold, or whatever the modern version is, above the scroll or whatever.
JS: Yeah, above the scroll, yeah.
TM: Seeing it in the top spot, I’m like, this has really touched a nerve, people being like, oh wow, this has made me understand my world a bit better. That to me is the end goal of a good data visualization: to get somebody to understand some data a bit better, and to really learn something. A lot of data visualization criticism gets caught up in the technical considerations of what is the best way to do things; with a lot of mapping, it’s projections. And I think in a lot of cases those negative technical considerations often don’t come into play if you have a really engaging dataset, which is what the submarine cable map really was. I mean, this data had been freely available and had been visualized on flat maps before; there are multiple websites that show this data. But showing it in a more realistic, 3D view of what the Earth looks like made it a little more grounded. It wasn’t just lines on the planet; it was showing how we are all connected with this web. In a lot of these network visualizations, with the abstraction of lines over the earth, the lines can be confused with the country borders and coastlines on the map, and you have to help the reader separate them. Moving into 3D, I was able to physically do that: the lines floated above the earth, so you’re able to see them. It was using this technical trick of 3D to get past that question of, what’s the data here, and what’s the point of the visualization. I don’t know why this one struck a nerve as much as it did, but I think, just generally, 3D can create these really engaging plots, because it can create really beautiful visualizations.
And I think that can really enhance even data that doesn’t necessarily need the 3D to represent it, because, yeah, you could just plot this on a 2D map, and it would represent the exact same thing. But seeing it in this 3D view made it that much more engaging, so people could actually see it and be like, oh, this is my earth, my planet. I don’t know why things go viral.
JS: No, yeah, it’s impossible to know, but I think you’ve touched on a few of them. And I’ll just give you my two cents. In the DataViz community, it struck a nerve because of the Tufte thing. But more generally, I think your choice of colors was something that made it pop. They were very vibrant, almost fluorescent colors, so those popped. But to your point, there is something about the animation of the globe spinning that, in some ways, creates this reveal of this underlying network that we see across the whole world. If you see it on a 2D static map, it’s essentially lines across this thing, but there’s something about that animation. As it spins, you’re like, oh, I can track this bright green line from New York to Berlin to Moscow, and I can see that, and I have to sort of wait for that to happen as it reveals itself. I think that’s part of it.
TM: Yeah, and that actually brings us to the next point I wanted to make about 3D. 2D DataViz, I think, has a lot in common with, has a lot of crossover with illustration. There are a lot of aspects of art, like color theory, that you need to use when you’re doing DataViz in 2D, and I think a lot of illustrators are often very good at DataViz when they get into it. Alli Torban, for example, is a very good illustrator, and she makes very good data visualizations. But I think 3D is interesting, because it actually has a lot in common with cinematography. You need to use a lot of techniques that cinematographers use, like the slow reveal. With rayrender, you actually have to worry about depth of field, where you’re focused, you have to worry about lighting, you have to do lighting design. It’s actually a lot more related to how you film something and reveal something in a movie or TV show than to the flat, static nature of 2D. Now, you also have interactivity, so those are really the two choices you have in 3D: whether you want to make it a fly-through, video-game-style experience where you’re flying through the space and revealing it that way, or you can take this animation approach. I think interactivity is a bit harder for technical reasons. One, you have to deliver big 3D models to people, and that can be hard from a pure web bandwidth point of view. But also, a lot of people don’t really work in 3D space that well. I’m part of the Nintendo 64 generation, so I know how to go back and forth, side to side, up and down, but a lot of people, even a lot of people my age, don’t really do well in 3D, because it’s hard, you have lots of degrees of freedom. So interactivity is good in some senses in 3D, but it’s actually much harder to implement correctly. But movies, I mean, one, it’s easy to share a movie on social media.
You can embed a movie on Twitter. All major social media sites support embedding videos, which is why I think animation is really my preferred method of sharing 3D visualizations. And two, it allows you, the creator, to tell a story, rather than having the person walk through and construct the story you brought them there to tell. That’s much harder in 3D, because there are just a lot more places they can go. One of the visualizations I made this past year was a VR rollercoaster ride, sort of a demo proof of concept of the technology, where you’d have a 3D dataset and you’re taken on, like, a monorail tour through your data. That kind of bridged the gap, in that it was actually just a movie. I was able to render all the frames, but it was a movie rendered with a 360-degree view, so if you had VR goggles, you could put them on and look around as you were traveling through the scene, which kind of splits the difference. I really like that. If you look at a lot of 3D tours through data, like the ones the New York Times has done, they’ve combined storytelling with a sort of 3D tour where, as you scroll, the camera travels through the scene…
JS: That moves, right.
TM: And I think this element of interactivity with the VR aspect kind of splits the difference, and that’s about as far as I would probably go with it, because it’s still just a movie where they can look around. But yeah, generally speaking, I like movies because the technology is much better supported. It’s a lot easier for me to put together, and I think it’s really a lot better for the end user, at least for now.
JS: Yeah. Well, that’s cool, so I’m now eagerly anticipating your next VR R package in 2022, and I’ll have to go buy whatever the Metaverse decides to put out and let me purchase, yeah. Tyler, thanks so much for coming on the show, this has been great, really interesting stuff. And I’m glad you were able to brush off the Tufte criticism, because that can be a tough pill to swallow when somebody like that does that. But I’m glad to see this work, I’m glad to see people using it. I’ve seen it more and more now, which may just be me having my eyes open a little bit, but it’s really great work, congratulations, and thanks again for coming on the show.
TM: Yeah, thanks for inviting me.
Thanks to everyone, for tuning in to this week’s episode of the show. I hope you enjoyed that. I’ve put the links to everything that Tyler and I talked about in the show notes so you can check out the R packages he created, you can check out the tweet from Tufte and the various responses to that, and just more generally, you can go play around with some 3D. So until next time, this has been the PolicyViz podcast. Thanks so much for listening.
A number of people help bring you the PolicyViz podcast. Music is provided by the NRIs. Audio editing is provided by Ken Skaggs. Design and promotion is created with assistance from Sharon Stotsky Ramirez, and each episode is transcribed by Jenny Transcription Services. If you’d like to help support the podcast, please share it and review it on iTunes, Stitcher, Spotify, YouTube, or wherever you get your podcasts. The PolicyViz podcast is ad free and supported by listeners. If you’d like to help support the show financially, please visit our PayPal page or our Patreon page at patreon.com/policyviz.