If you attended the Data Visualization Society’s (DVS) Outlier conference a couple of weeks ago, you probably heard that they will be hosting the Information is Beautiful Awards (IIB) competition going forward. The IIB awards started in 2012 (though they didn’t run in 2020 or 2021) to celebrate “excellence & beauty in data visualization, infographics, interactives and information art.” While many people were excited about the announcement and the return of this annual awards showcase, I am a little less enthusiastic.

I’ll preface this post by saying that I understand why people love awards like IIB and Malofiej. They give nominees and winners visibility, notability, and credibility. Serving as a judge offers similar advantages, along with promotion and free travel (I’ve served as a judge at IIB, Malofiej, IronViz, and other smaller competitions). Plus, the parties are typically lots of fun, with great food and even better people.

Data visualization cookies at the IIB awards in London in 2019.

One other important caveat to my critique here: I have entered my work into several contests. I entered IIB twice (coming in third one year) and won an award from the Department of Education early in my data visualization career (around 2015). And my perspective is also colored by my age and having worked in the field for a while now—an early-career Jon might feel differently.

Why I Have Reservations

Let me explain my reservations about data visualization award competitions while acknowledging that DVS may address some or all of them. My purpose here is to highlight what I view as fundamental issues with data visualization awards in the first place and not to attack DVS, IIB, or any other contest organizer.

The most important, fundamental problem I see with data visualization awards is that they separate—and then ignore—the data from the visualization. Not a single judging event in which I have been involved has looked deeply at the data or explored whether the creator used the data correctly or accurately. Before you say that this would add an unreasonable burden on judges or award organizers (which is certainly correct), let’s explore the concept of data visualization awards a little further. If the underlying principles of data visualization awards are flawed, we should ask ourselves whether such awards should exist at all.

Imagine, if you will, two bar charts submitted to an award challenge. Both are the same in every way—number of bars, color, font, layout, and labels—except for one crucial detail: One bar chart uses data easily downloaded from a public website—let’s say per capita gross domestic product from the Our World in Data website. The second bar chart shows results from a year-long study that combines publicly available data, tabulations from an administrative data set, and qualitative results from 300 interviews.

Now—assuming you believe the data used in the second bar chart—which one would you judge to win this award? My guess is that you would give your vote to the second chart. The design is identical, but the data are “better” in the second.

Let’s take it a bit further. Let’s say there’s a new chart that is absolutely stunning in its beauty and form. It has striking colors, loads of great detail and annotation, and unusual, engaging shapes and design (think, for example, of the work of Giorgia Lupi or Federica Fragapane) but, again, uses a relatively basic data set. Which one wins? My guess is that this stunning chart would beat the simple bar chart built on the more complex data.

Let’s do one more thought experiment. Take the lovely, intricate graph from the second scenario and put it up against the bar chart with the in-depth data from the first scenario. But now, the data in the intricate graph are garbage—the creator dropped some observations because they were problematic, scaled some values incorrectly, or used a data set with underlying sample selection issues. Which one would win? Would you even know the complex graph had any data issues at all?

This is the problem—we can’t separate the data from the visualization. Unfortunately, that separation is exactly what happens in these competitions. When I judged the online entries at Malofiej in 2017, the panel of four judges had to look through roughly 1,000 visualizations in about three days. There was NO WAY we could look at the data underlying those entries! To make it through the entire corpus, we had to take it on faith that the entries were created correctly, objectively, and honestly.

The Counterarguments

Some argue that there are all sorts of awards given for artistic content like music (Grammys), movies (Oscars), stage (Tonys), and television (Emmys). But what makes data visualization different from these other fields is that the data are central to the visualization. And a panel of judges can’t know whether the data in a visualization were cleaned, analyzed, and prepared incorrectly, whether with malice or not.

Others might argue that in the academic peer review process, many (probably most) reviewers don’t examine the data underlying the analysis they are reviewing. But that’s just one of the reasons why many argue the peer review process is broken and needs substantial reform. Will data visualization awards suffer from similar problems?

Some others argue that awards confer credibility and notability, especially for young or new creators. That is likely true (and is surely true for people outside the data visualization field), but I still worry that bad—or, even worse, incorrect—visualizations will be awarded and celebrated.

It’s also important to know that IIB and Malofiej require an entry fee for consideration (I think both were fairly modest, somewhere in the $50–$100 range). These fees are modest relative to those in other fields (or even, for that matter, to the cost of submitting work to an academic journal) and are used to cover overhead costs and the like, but they are still likely a barrier to some creators.

Another argument is that “if we don’t hold the awards, someone else will, and they’ll do it worse than us.” The second part of that sentence is debatable, but the first part presumes that awards should exist in the first place, and, obviously, I don’t necessarily agree.

Is There a Way Forward?

Maybe there is no way forward that will address this fundamental issue with data visualization awards that I’ve laid out here, but here are some ideas worth considering:

  • Require the creator to submit some kind of disclosure confirming that they have appropriately fact-checked or analyzed the data before submitting. Maybe they need to answer some basic questions about their data?
  • Require all creators to submit their final data files and code (for a sense of how organizers might automatically check such submissions, see the sketch after this list).
  • Require all creators to submit a disclosure statement outlining any funding sources or potential conflicts of interest.
  • Build in time for the organizers or judges to fact-check or review the data in some basic way. Maybe judges should fact-check the finalists?
  • Segment entries into multiple categories, such as interactive versus static, or by data visualization tool. Or maybe restrict the entire competition to a single tool, as in Tableau’s IronViz competition. Or maybe require everyone to use the same data set, as in the upcoming DVS State of the Industry survey challenge.
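To make the data-and-code idea above concrete, here is a minimal sketch of the kind of automated intake check an organizer could run before an entry ever reaches the judges. Everything in it is hypothetical: the file names, the disclosure fields, and the folder layout are illustrative assumptions, not any real competition’s requirements.

```python
# Hypothetical sketch: a minimal automated intake check an award organizer
# might run on each submission. File names, disclosure fields, and folder
# layout are illustrative assumptions, not any real competition's rules.
from pathlib import Path
import json

REQUIRED_FILES = ["final_data.csv", "analysis_code"]  # assumed submission layout
REQUIRED_DISCLOSURES = [
    "data_sources",
    "funding",
    "conflicts_of_interest",
    "fact_check_statement",
]  # assumed disclosure questions

def check_submission(folder: str) -> list[str]:
    """Return a list of problems found in a submission folder."""
    root = Path(folder)
    problems = []

    # Final data files and code must be present.
    for name in REQUIRED_FILES:
        if not (root / name).exists():
            problems.append(f"missing required file or folder: {name}")

    # A disclosure statement must exist and answer the basic questions.
    disclosure_path = root / "disclosure.json"
    if not disclosure_path.exists():
        problems.append("missing disclosure.json")
    else:
        disclosure = json.loads(disclosure_path.read_text())
        for field in REQUIRED_DISCLOSURES:
            if not disclosure.get(field):
                problems.append(f"disclosure question unanswered: {field}")

    return problems

if __name__ == "__main__":
    # Example: check a single (hypothetical) entry folder.
    issues = check_submission("submissions/entry_001")
    print("OK" if not issues else "\n".join(issues))
```

A check this shallow obviously can’t verify the analysis itself, but it would at least guarantee that judges could open the data and code if they chose to.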

I’m hopeful DVS will meet the challenge, but I worry we are setting up our field to (continue to) view a data visualization award competition as essentially a design competition without the proper investigation of the underlying data.