Jonathon Reilly is an innovative and results-driven executive with over 20 years of experience in product management, business development, and operations. As the Co-Founder and COO of Akkio, he has helped create an easy-to-use AI platform that empowers users to build and deploy AI solutions to data problems in minutes.

Prior to founding Akkio, Jonathon served as the VP of Product & Marketing at Markforged, where he played a critical role in the company’s growth and success. With a strong background in the tech industry, Jonathon held various leadership positions at Sonos, Inc., including Leader of the Music Player Product Management Team, Global Channel Development, and Senior Product Manager. He began his career at Sony Electronics, where he contributed significantly to the development of a wide range of consumer products as a product manager and electrical engineer.

Jonathon holds an MBA in Entrepreneurship/Entrepreneurial Studies from Babson College – Franklin W. Olin Graduate School of Business and a BS in Electrical Engineering from Gonzaga University.

Episode Notes

Jonathon | Medium | Twitter

Akkio

How to Lie with Statistics by Darrell Huff and Irving Geis

Data at Urban: How We Used Machine Learning to Predict Neighborhood Change

AutoML

Related Episodes

Episode #227 with Max Kuhn

Episode #225 with Julia Silge

Episode #227 with Claire McKay Bowen

Episode #227 with Steve Franconeri and Jen Christiansen

New Ways to Support the Show!

With more than 200 guests and eight seasons of episodes, the PolicyViz Podcast is one of the longest-running data visualization podcasts around. You can support the show by downloading and listening, following the work of my guests, and sharing the show with your networks. I’m grateful to everyone who listens and supports the show, and now I’m offering new exciting ways for you to support the show financially. You can check out the special paid version of my newsletter, receive text messages with special data visualization tips, or go to the simplified Patreon platform. Whichever you choose, you’ll be sure to get great content to your inbox or phone every week!

Transcript

Welcome back to the PolicyViz podcast. I am your host, Jon Schwabish. On this week’s episode of the show, we turn our attention to AI, which, if you’ve been paying attention to anything around the world, you know is a big conversation. We’re not going to focus on ChatGPT or DALL-E. We’re going to talk to Akkio co-founder Jon Reilly about the work his firm is doing in this space of AI when it comes to generative models and data visualization, and about trying to bring AI to folks to visualize and analyze their data quickly and more easily. We’ll see what the future holds, of course, for AI, and it’s an interesting conversation to see how some of the early companies in this space are trying to utilize AI to help folks work with their data better and more efficiently, and ultimately, create better visualizations. So here’s my conversation with Jon Reilly; I hope you enjoy this week’s episode of the podcast.

Jon Schwabish: Hey, Jon, good afternoon, thanks for coming on the show.

Jon Reilly: Yeah, thanks for having me.

JS: I’m really excited. Obviously, AI stuff has exploded in the last few months, with DALL-E with images, and ChatGPT with text, and some of the other new video things coming out. I don’t really know anything about it, to be frank. But you and your company Akkio are kind of in a unique niche around data and AI. So I’m curious to learn more about it, and what you’ve seen happening, particularly in the last few months. But maybe we’ll just start with your background. I think, if I read your bio right, you’re an electrical engineer by training.

JR: Yeah, that’s right, yeah, I rolled back.

JS: On a path now to AI, yeah. So I’m curious, what’s that path?

JR: So the path kind of goes squarely through product management really. I started designing televisions for Sony Electronics back in the day, you know, video processing circuitry, primarily analog stuff. And I got into engineering because I liked sort of the discreteness of solutions, if you know what I mean; like, if you were right about something, you could sort of prove it to everybody, and point to the fact that it worked the way you expected, and that was really satisfying. But pretty early in my career, I was kind of wondering how the decisions about what to build were getting made, and why they were getting made, and so, I got into product management. And then, through product management, found my way into a series of smaller companies that grew and were successful. But I realized I really like the sort of startup side of things. The transition to AI really happened when I took over marketing at the last startup I was working at, this 3D printing company. And in that capacity, I realized there were a whole bunch of data-driven workflows inside of the business where we had a lot of information, but needed to make real-time decisions about what to do, so that we could behave optimally, I mean, you know, lead scoring is the classic case here.

There are a lot of studies that show that if you can respond to an inbound lead in the first 10 or 30 minutes of them getting in contact or requesting a demo or a chat, your connection probabilities are way higher, your sales probabilities are way higher; people appreciate responsiveness. We had way too many coming in, we couldn’t sort them, we couldn’t tell which ones were the good ones. And so, we started looking for solutions that would allow us to sort of tell if a lead was more likely to close than a different lead. And there are a lot of traditional ways of doing that, like looking at their behavior or their firmographic information. But really, machine learning models are perfect for that, they pattern match very well. And so, we set out to start employing some machine learning models. We went to some contractors, and these are largely professional services solutions; they kind of have a data scientist, like, on their side that does it for you. There were communication loop problems, and we just kind of realized, while we were going through this, that it would be really nice if there was a tool that let someone who is data competent, who is a subject matter expert in what you were working on, actually build some of these models themselves, and deploy them and put them to use. And so, that was sort of the founding principle behind Akkio, and yeah, it’s kind of a winding path: electrical engineering to product management, to Sonos, a wireless audio company, to 3D printing, to AI. But it’s always kind of been chasing something interesting that has real application, that I feel some sense of need or urgency around. And so, that’s sort of been the connective tissue: trying to build something that I think will be really relevant to a lot of people going forward.

JS: Right. So tell me a little bit about what Akkio does. I know it’s focused on the AI and data intersection there, and I’m curious how folks like me, regular folks, we’ll call them regular folks, although people listening to this podcast are not really regular folks, but how regular folks can use it. So let’s start with what it does first, or what you guys do first, and then we can…

JR: Yeah, so basically, it lets anyone with historic data build predictive models, and understand the patterns in their data that are driving their outcomes of interest, whatever those are – those are usually key business outcomes that you’re interested in, things like revenue, or churn, or conversion of customers. You basically can feed it historic information, and it’ll automatically, through a process called AutoML, and specifically neural architecture search, find the right neural architectures. It’ll find the patterns, surface those to you, so you can see what’s driving your outcome, and then, you can actually deploy those models and use them in real-time decision making. So you get sort of two benefits. The first is, by seeing the patterns, you can make some strategic decisions; so that might be where to focus your efforts, if it’s lead scoring, you’ll sort of see these types of leads are better than these types. So let’s focus our marketing on this type of lead. But you can also then hook it up to any of your systems in real time, and get a prediction on every record that gets changed or updated or comes in, and you can act on those predictions. And these are all classic machine learning applications; there’s a long history of them delivering value in businesses now, but usually, that value is delivered by the data science team.

So what Akkio does uniquely is we make it really easy for anyone, anyone who can work in Excel, to start building these same types of models, seeing what’s driving those outcomes, and taking advantage of them in their business decision making. And that’s the long and short of it. We’ve recently also been pretty popular because we built an NLP, actually a GPT-4-enabled, feature at the front end that lets you transform your data, so you can just make any request in natural language to do a data transformation, like reformat this date to an ISO standard or something, and it’ll just do it. Or, I think most interestingly for people, data visualization, so you can ask it to [inaudible 00:06:59] your chart on your data after you connect it, and it will. And then, we build the data pipeline between the dataset and that chart, so it makes it so you don’t need to know SQL or be able to code in order to accomplish all of these tasks. You just need to be able to sort of understand what’s going on in the data, like the subject matter expert, and ask the right questions to get the insights you’re after, or point it at the outcome you’re interested in, and let it tell you.
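
To make the AutoML idea concrete, here is a minimal sketch of the core loop: try several candidate models, score each by cross-validation, and keep the best. This is an illustrative stand-in only, not Akkio’s actual engine, which as described above searches over neural architectures.

```python
# Minimal sketch of the AutoML idea: try candidate models, score each with
# cross-validation, keep the best. Illustrative only; a real AutoML engine
# (like the neural architecture search described above) is far more involved.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (CV accuracy {scores[best]:.3f})")
```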

JS: Right. So let’s start at the beginning of that process, because I loaded my data, I mean, it has been said to me that regression analysis is machine learning, which is kind of like, okay, I guess, but okay. But how conditional on the predictive modeling is the data that I load into the tool, and where does the tool help me? Can the tool help me identify what I’m missing?

JR: Yeah, so the really interesting answer here is you never know if your data is going to support predicting your outcome until you train a model. And so, from the beginning, we assumed a couple of things, because this is our experience operating businesses. We assumed your data was going to be messy, meaning we assumed there were going to be lots of blank values, that you were going to have, like, some numbers in category columns, all sorts of messiness going on in there, because that’s very, very typical. And so, we designed our ML engine to be robust to that when you’re training models, and we made the process workflow very quick to get to an answer of, like, here’s how well your data predicts your outcome.

So really the only conditional piece of it is you need your data to be in tabular format. We’re not going to do any of the PDF extraction, and we don’t process images; we’re just tabular business data. But if you have it in a CSV form, where you’ve got headers in row one for your columns, and then record, record, record, record, we can take it and we will automatically build you a model that is correct for the type of outcome you’re predicting. We do three basic model types: regression, or numerical predictions; classification, or categorical modeling; and also time series, which is an area that’s typically very difficult for people to work with, the concept of building a time-driven model, and we do that too.
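
A simple heuristic for picking among those three model types from a tabular target column might look like the sketch below; the column names are hypothetical examples, and Akkio’s actual detection logic is not public.

```python
# Sketch: infer the model family from the target column of a tabular dataset.
# Column names are hypothetical examples.
import pandas as pd
from pandas.api.types import is_datetime64_any_dtype, is_numeric_dtype

def infer_task(df: pd.DataFrame, target: str, time_col: str | None = None) -> str:
    """Return 'time_series', 'regression', or 'classification'."""
    if time_col is not None and is_datetime64_any_dtype(df[time_col]):
        return "time_series"      # outcome indexed by a datetime column
    if is_numeric_dtype(df[target]):
        return "regression"       # numeric outcome, numeric prediction
    return "classification"       # categorical outcome, class prediction

df = pd.DataFrame({
    "signup_date": pd.to_datetime(["2023-01-01", "2023-01-02"]),
    "revenue": [120.0, 87.5],
    "churned": ["yes", "no"],
})
print(infer_task(df, "revenue"))                          # regression
print(infer_task(df, "churned"))                          # classification
print(infer_task(df, "revenue", time_col="signup_date"))  # time_series
```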

And we’ve put a lot of effort into trying to take your data in the format it is, which is to say, however it comes out of your system, or is in your data warehouse, and make it pretty much straightforward to get into modeling without you having to worry about doing a bunch of reshaping of it. But we do have tools, like this natural language transformation tool, and the Auto Clean tool that you can click, and we’ll just apply some automatic cleaning steps, the kind that are typically best practices before ML, for you in a single click. So there’s a couple of places where we make it pretty easy. I mean, ease of use is our whole value proposition, so we work real hard on that.

JS: Sure. Right, no, right, I mean, I’m always for, like, not everybody needs to be a coder, and so, make it easier for people. So I load my data in, I can also specify that the code 999 is a missing value, it’s not an actual value.

JR: Yeah.

JS: Do I need to do that? Can the, like, how does that…?

JR: So we’re not going to know that bit, so if there’s a specific nuance in the dataset, if you’re missing a value, we actually encode that value as missing. And then, we try and learn if there’s a pattern when it’s missing versus when it’s present. You don’t necessarily need to translate that 999 into anything. We’ll just encode it, and then, we’ll learn the pattern there, and then, we’ll tell you, like, hey, when we see 999, here’s what we noticed is the impact on your outcome. And then, you can be like, oh okay, well, I know what 999 means. So I know, like, what this situation means for the outcome. Or, if you’re just feeding us a new record, and it has that code, we’ll take that field, and all the other features that come off the record, and we’ll use them to make a prediction.
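
The encode-the-missingness pattern described here can be sketched with scikit-learn’s missing-value indicator: treat the sentinel code as missing, impute it, and keep an indicator column so a model can learn from the fact that it was missing. A simplified stand-in for whatever Akkio does internally:

```python
# Sketch: treat a sentinel code (999) as missing, impute it, and keep an
# indicator column so the model can learn a pattern from missingness itself.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[25.0], [999.0], [40.0], [999.0], [31.0]])
X = np.where(X == 999.0, np.nan, X)   # 999 is a "missing" code, not a real value

imputer = SimpleImputer(strategy="mean", add_indicator=True)
X_encoded = imputer.fit_transform(X)
# Column 0: imputed value; column 1: 1.0 where the original value was missing.
print(X_encoded)
```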

JS: And when it does that prediction, again, thinking about the person who may not understand, let’s say, even basic OLS regression, they load in their data, does Akkio tell the user, you know, the R squared is such and such, and then kind of translate what that means for people?

JR: Yeah, we don’t start at that depth. So the trick to ease of use, in my experience, is progressive unfolding of complexity. So you want to start really simple, and so, if we’re doing a classification problem, the first two pieces of information we tell you are how many times the model got it right as a percentage. And so, in a standard training process, you withhold 20% of the data, you train on 80%, and then you predict against the 20% you didn’t show the training process, and you see how well you did at it. And so, we show you that performance, like, the model is 95% accurate. Of course, that value could be misleading, right, because if you have an imbalanced class, like, let’s say, like this lead scoring application we’re talking about, only 10% of the deals that come in the front door might actually convert into business. And so, if your model just guessed nobody would ever convert, it’d be right 90% of the time, and you’d think you had a really good model, but you have a terrible model. Right?
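
The 80/20 holdout and the imbalance trap are easy to demonstrate with synthetic data: with roughly 10% positives, a “model” that always predicts the majority class already scores about 90% accuracy while being useless. A minimal sketch:

```python
# Sketch: 80/20 holdout accuracy, and why it misleads on imbalanced classes.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.10).astype(int)   # ~10% positives, like lead scoring

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y
)

# A "model" that always guesses the majority class (nobody converts)...
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
# ...still looks ~90% accurate, while never finding a single converter.
print(accuracy_score(y_test, baseline.predict(X_test)))
```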

JS: Yeah.

JR: So then, we get to thinking about that, and the real thing is, if your outcome of interest is the rare case, which is almost always the reason to leverage machine learning, because you’re looking for diamonds in the rough, so to speak, then what’s important is how often, when the model thinks it’s going to be a converted lead, or the outcome of interest, it actually is that outcome, like, what is the densification rate versus the base rate in the dataset. So if, when the model thinks a lead is going to convert, it does 50% of the time, but in the base data rate, it only converts 10% of the time, you’ve sort of got a real business value to using that model now, right? You’ve densified the outcome of interest by about five times, and so, the second piece of information we show you is how much denser the outcome of interest is, even at a 50% decision threshold. And, of course, further down, we’ll show you some tools where you can set different decision thresholds and understand different densifications, because really you’re making a probabilistic decision, and so, your business needs to account for that, like, its dollar value capture. But we start there, and then, you can drill down into the advanced settings and see the full confusion matrix, the F1 score. But everywhere we show you one of those complicated data science terms, we define it for you right next to it, so you can see what’s going on and what it means and which direction is better or worse for that score.
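
The densification number described here is what is usually called lift: precision among the records the model flags, divided by the base rate in the data. A minimal sketch with made-up numbers:

```python
# Sketch: "densification" (lift) at a decision threshold, i.e. how much denser
# the outcome of interest is among flagged records than in the raw data.
import numpy as np

def densification(y_true: np.ndarray, p_pred: np.ndarray, threshold: float = 0.5) -> float:
    flagged = p_pred >= threshold          # records the model flags as positives
    precision = y_true[flagged].mean()     # hit rate among flagged records
    base_rate = y_true.mean()              # hit rate in the raw data
    return precision / base_rate

y_true = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0])   # 20% base conversion rate
p_pred = np.array([0.9, 0.2, 0.1, 0.6, 0.3, 0.8, 0.1, 0.2, 0.4, 0.1])
print(densification(y_true, p_pred))   # precision 2/3 over base 0.2, ~3.3x denser
```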

You can even drill all the way down and see the actual model we picked and the other models we compared it against, and how they performed on a relative basis. So depending on your level of advancement, you might drill down into sort of these details, but for most users, we stop there, and then, we move directly into, okay, let’s take a look at what’s driving your outcomes, what are the patterns in your data that are relevant to predicting this outcome. And then, we show you, like, here are the fields that are most important, here are the values in those fields that impact the outcome in which ways, and on any given feature, here’s a segment of that feature that’s interesting for this reason, and is associated with this outcome.

And so, we try and present, you know, like, all of this is really carefully crafted presentation of information. And I’m not sure that we’re super mature on this yet, there’s still miles to go. I think we’re probably still more confusing than we should be, to be honest. People tell us that we’re the easiest one they’ve used, but presenting visual information about data is hard to do in a simple manner. Right?

JS: Yeah.

JR: And so, yeah, we always try and ask ourselves, for this thing that we’re showing you, does it make sense in the order it’s being shown, and do you get exactly what’s happening at a glance? And then, once we feel like we’re confident in that, we go use a platform that allows you to interview analysts or your target user, and we show it to them, and say, what does this tell you, and if they can’t answer that, then you go back to the drawing board to try again.

JS: To try again, yeah.

JR: Yeah.

JS: So I was going to ask you about the feedback you’ve gotten on this particular part. I want to get to the DataViz part in a second. But are you finding that most of your users are the non-data scientist type folk?

JR: Yeah, some, you know, that’s our target. So our thesis is, and this is what we hear every day when we talk to customers, most businesses have a data science team of some kind, especially most mature businesses, and that data science team is super critical. They’re working on the 10% problems, call them, the most important, business-differentiating problems there are in the organization. They need really complex, powerful tooling that gives them a fine degree of control over every little bit of the product. And so, their needs are wildly different than our target users’ needs. And so, they look at our platform, and outside of using it for some explainability about their models, or using it for some rapid prototyping, because it’s pretty easy to spin up a model and see if there’s anything there, they’re like, this tool does not provide me with the dials and controls that I need to do my job, I’d much rather be working in a Jupyter notebook or some other more technical platform.

So we’ve intentionally shipped the product specifically for our user who’s not that technical. And so, our goal is to enable the 90% problems, which is the long tail of people working in your business operations, in marketing, sales, support, HR, finance, to start to leverage machine learning in their daily workflows. And that’s our goal; there’s so much low-hanging fruit in value extraction from data. And I think people are waking up to the fact that you can use these AI-supported tools, or ML, AutoML tools, to start to do tasks on an individual basis that don’t require a huge project spun up around them. And that’s been a lot of what we’ve seen with the sort of emergence of GPT, and some of these other generative tools: they help anyone working in text or image creation do their job more effectively. We’re making a tool that helps anyone working with data do their job more effectively.

JS: Right. So I want to come to the other tools in a second, but I want to focus a little bit on the DataViz piece, where there are really two parts to it. What I found really fascinating, obviously, from the DataViz side, is a user can go into Akkio, and tell it aspects of the data, can describe the data, can also say, as you mentioned, I want this in this date format, and just do it for me. But also, it seems, I mean, I haven’t really gone in and used the tool too much, but like, there’s a way to build these narrative charts and graphs to sort of build more of that storytelling piece, as you were kind of talking about a little bit earlier.

JR: Yeah, so really, there are two pieces in there. So we have this idea of a report, and the report starts as sort of a blank canvas, and you can save any data visualization we make anywhere for you in the entire product to the report, in any order you like. That report is shareable with people inside or outside of your org if you so choose, and it’s a data pipeline back to your data warehouse, so as your data environment evolves, that report grows right along with it. And so, it’s still on the user, as the subject matter expert, to say, show me, well, I mean, you can ask it, show me three interesting things about my dataset, and it will. But those might not be the topical interesting things that you care about.

JS: Right.

JR: So it’s up to you as the user to have some idea of what your objective is, right? Like, what’s first and foremost important is, what are you trying to accomplish as a business, and make sure that the dataset that you’re working with is relevant to that; you can’t have some wildly out-of-bounds dataset. But once you’ve got that covered, it’s pretty straightforward. You can say, show me this relationship, and that relationship, show me this over time, filter this down to show me things that look like this, and it’ll do that all automatically for you. And then, you can save all of those things in a report and reorder them, and then, those will be live pipelines right there, really easily. And then, the lift to sort of visualize your data is the lowest it’s ever been, as opposed to trying to make these charts in any other tool, I think. And by the way, everyone is going in this direction. This is not just going to be us using large language models to [inaudible 00:19:51] you’re asking into charts; everyone’s going to do it.

JS: Yeah, everyone’s doing it, yeah.

JR: But it’s making people wildly more efficient in terms of their execution. And it’s not just, show me a graph of what’s going on in the data today; you can also then take any of the driving factor analysis that comes with training a machine learning model, and put that in the report too. And then, if you’re using that model in a deployed fashion, the monitoring of that model’s predictions, like your trends over time, are all pushed through to the report as well. And then, the vision for where we’re going to take that is, as we watch your data change over time, we can start to build time series models for every chart, because there’s basically three questions every business has, which is, what’s going to happen, why is it going to happen, and what can I do about it. And those all require a bit of a time view of things. You can’t just show someone the latest static view. If you really want to know how you’re trending, you want to know how you’re trending and how the driving factors are trending. And then, ideally, if you’re working with the time variable, you want to know the lag between your driving factor and your outcome. So if we tie this back to, like, a marketing funnel, it’s, how long between when a lead enters the funnel till when deals convert, and what’s the relationship between the volumes there. And then, I could look back and say, okay, what’s happened in the last six months to my top of funnel, and what do I expect that’s going to do to my revenue in the next six months, and we can show that to you.
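
A first-pass way to estimate the lag between a driving factor and an outcome is to find the shift that maximizes their correlation. A sketch with synthetic weekly data, where revenue follows leads by about six weeks; production time series models would be more sophisticated than this:

```python
# Sketch: estimate the lag between a leading indicator (leads in) and an
# outcome (revenue) by finding the shift that maximizes correlation.
import numpy as np

rng = np.random.default_rng(1)
weeks = 104
leads = rng.normal(100, 10, weeks)
revenue = np.roll(leads, 6) * 50 + rng.normal(0, 50, weeks)   # ~6-week lag

def best_lag(driver: np.ndarray, outcome: np.ndarray, max_lag: int = 12) -> int:
    corrs = []
    for lag in range(max_lag + 1):
        d = driver[: len(driver) - lag] if lag else driver
        corrs.append(np.corrcoef(d, outcome[lag:])[0, 1])
    return int(np.argmax(corrs))

print(best_lag(leads, revenue))   # prints 6: revenue trails leads by ~6 weeks
```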

JS: And so, is a user able to pull in external data, so I could imagine, can you pull in, like, daily stock market prices or unemployment rates from the Fed, GDP numbers, like, could you pull that in without having to pull that into your data and then load it into Akkio – is Akkio able to pull directly from these other sources now?

JR: We can pull from multiple sources and merge your data together. You have to have some join information that could be like a date or an ID or something. So we do make that possible. You do have to have a live pipeline to wherever the data source is. And so, you can do that via API, so it kind of like – the answer is it depends on the integration. But we are integrated with platforms like Snowflake, and Snowflake has like a pretty robust data marketplace as well, where you can get data feeds for various stuff, and then, of course, you can join those, and do the analysis on them. But it’s the right question, because interestingly, the way to make machine learning models or predictive outcomes more accurate, is to bring data you don’t have to the table. You can only make them slightly better by making your machine learning process better. But if you bring data that’s relevant to the outcome that you don’t have, you can make them massively better. And so, the long term game here, I think, is all in data augmentation. And, in fact, that’s what really separates like using an ML tool like this in your business from using like a GPT-4 to write generative content in your business. Because with a tool like that, it’s a level playing field, everyone has access, it’s like the internet basically. The ability to ask a question in Google made everyone more efficient, it’s even better with GPT-4, I would argue. But everyone has the same benefit once they figure it out, so it’s just an adoption race. And I think we can all see adoption is going to be incredibly fast.
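
Merging an external feed onto your own records on a shared key, as described at the top of that answer, is a standard join; in pandas it is a few lines. The table and column names here are hypothetical examples:

```python
# Sketch: join an external data feed (e.g., economic indicators) onto your
# own records using a shared date key. Names are hypothetical examples.
import pandas as pd

deals = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-02", "2023-01-03"]),
    "deal_value": [5000, 7500],
})
indicators = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-02", "2023-01-03"]),
    "unemployment_rate": [3.5, 3.5],
})

merged = deals.merge(indicators, on="date", how="left")   # one row per deal
print(merged)
```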

Your business’s unique data, that’s your gold, that’s your competitive moat, that’s the thing you can build insights off of that no one else can. And so, this tool – the value of the tool is a little bit different, because it really lets you start to leverage your business data, and any data you can pull in that’s relevant to your business outcomes. So the more you can gather, typically, the better it is. And so, in the longer-term roadmap, we’re definitely interested in how we can help you augment your data with things that are relevant to your predictive outcome. And I think that probably starts with some user guidance, because you know things that impact your outcome. The world of data, so to speak, is a lot of data out there, and it’s growing exponentially every day. Searching that for relevant data is kind of a hard task today, although, again, with large language models that can parse context, that starts to get a little bit easier too, because you can start to narrow down the search.

JS: I recognize the goal of letting anybody go in and use it – say, I’m the head of HR at my company, I don’t know anything about machine learning, don’t know anything about code. But can I bring my data science team into the tool so that they can sort of push the boundaries? Say they’re trying to do even something simple, like pull data from an API, maybe implementing code within the tool to extract those data from an API?

JR: So yeah, typically, how that should work is the data engineering team would already be putting together topical views of the data for the relevant groups. So as the HR leader in a business, you would have access to some pre-groomed data feed that’s been pulled from various sources and joined together inside of your data warehouse, and you probably have some analysts today doing reporting off of it, telling you how you’re doing your job, and how you’re executing against your key initiatives. You can plug that dataset straight into Akkio, you don’t need to pull anyone in to do any tasks. Although, if you want to bring more or different data to the table, you may need to involve, depending on the technical nature of gathering that data, somebody from the data engineering team or an advanced analyst who is able to go pull it together. You can also join that together in platform if you need to, it’s pretty simple, you just say, this column and this column in these two datasets have the matching IDs, go join the rows. We even do fuzzy match, so if you have, like, close but not identical text fields, for example, and you can do that across multiple columns. So we try to make it easy to bring more data to the equation, because strategically, that’s important for us in the long haul.
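
That fuzzy-match join can be sketched with the standard library’s difflib; the company names below are hypothetical, and a production matcher would be more robust than this:

```python
# Sketch: fuzzy-match join on near-identical text keys. Names are hypothetical;
# real fuzzy matchers handle case, punctuation, and multiple columns better.
import difflib
import pandas as pd

crm = pd.DataFrame({"company": ["Acme Corp", "Globex LLC"], "owner": ["Ana", "Raj"]})
billing = pd.DataFrame({"company": ["Acme Corp.", "Globex, LLC"], "mrr": [900, 1200]})

def fuzzy_key(name: str, choices: list[str]) -> str | None:
    match = difflib.get_close_matches(name, choices, n=1, cutoff=0.8)
    return match[0] if match else None

crm["match"] = crm["company"].apply(lambda n: fuzzy_key(n, billing["company"].tolist()))
joined = crm.merge(billing, left_on="match", right_on="company",
                   suffixes=("", "_billing"))
print(joined[["company", "owner", "mrr"]])
```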

But yeah, your data science team, if you ask them to do a task, is going to do it in a notebook, because they’re going to have more control over it, and they’re going to bring you back something that’s a little bit more complex to look at, but maybe a little bit more powerful. The place where you start to use us is, the reality is, the HR person doesn’t get a lot of attention from the data science team, who are working on something that’s super high priority, and so, the HR person doesn’t really have a solution today, and that’s what we’re building for; it’s something that makes it so that they can get in there. But yeah, there are situations where we have the data engineering team building the feeds in order to enable the business users to start to interact with the data, and look at the answers. But we’re typically today used by an analyst who’s already fairly comfortable working with the data, with a few of the business owners getting into it, and starting to understand what they can do with the platform.

JS: Right. I know there’s a lot going on, you’ve got a lot of stuff in the hopper, but where do you see the DataViz part going of the tool?

JR: I think that’s the most important piece; like, everything happening under the hood is kind of abstract, it doesn’t really help you understand what’s going on. And so, when we talk about ease of use, we’re talking really about two things. One is navigation, right, like, making sure that your workflow makes sense. But probably the more important thing is visualization of the data patterns. And that’s communicating a complicated thing that’s going on in your data in a way that anybody can look at it and understand it, and that’s been a problem, I mean, that’s a problem forever. Right? Like, that’s very complex, and the litmus test on that, like I said, is, you show someone a chart, and if they can understand it without asking a question, you’ve sort of passed it.

JS: Yeah.

JR: You’d be surprised at how often you can’t understand a chart without asking a question, if you really think about it; like, it’s pretty hard. And so, I think continuing to iterate there is incredibly important. I expect we’re putting some pieces in place where people can get feedback on some of the generated visualizations. When you request a chart, we actually use a language model to write the code to make that chart, and then, we make the chart. And we stick to some common chart types, like scatter plots and bar charts and pie charts and stuff like that, you know, line charts. But as we get more complex there and start to be able to show more visualizations, we’re going to add, like, a thumbs up, thumbs down, did this make sense to me, and try and keep iterating on displaying the information in a way that’s digestible, let’s say. But, for sure, we live and die by that, because the minute somebody can’t understand what’s going on in the platform, we’re kind of, like, toast, and they just disengage.
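
The generate-then-render pattern described here can be sketched as below. `call_llm` is a hypothetical stand-in for whatever model API is actually used, hard-coded so the sketch runs offline; executing model-generated code in production would of course require sandboxing.

```python
# Sketch of generate-then-render: an LLM writes chart code from a natural-
# language request, then the platform executes it. `call_llm` is hypothetical.
import pandas as pd

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call, hard-coded so the sketch runs offline.
    return (
        "import matplotlib.pyplot as plt\n"
        "df.groupby('region')['revenue'].sum().plot(kind='bar')\n"
        "plt.title('Revenue by region')\n"
        "plt.savefig('chart.png')\n"
    )

df = pd.DataFrame({"region": ["East", "West", "East"], "revenue": [10, 20, 5]})
request = "show me a bar chart of revenue by region"
prompt = f"Columns: {list(df.columns)}. Write matplotlib code using `df` to: {request}"

code = call_llm(prompt)
exec(code, {"df": df})   # never run untrusted generated code outside a sandbox
```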

JS: Well, it also sounds like if I am the head of HR, I’m probably communicating to other folks who are not in that data science team. Right? Like, I’m trying to pull this stuff together, I’m trying to make a case for either the folks that work for me, or the folks I work for, and trying to make these cases, and they may not be the data science folks. So I really do need to work on the communication.

JR: Yeah, I mean, that’s always been like a – it’s like a PowerPoint problem, right? Every board meeting I’ve ever sat in is a series of different charts you’re looking at, and you’re trying to figure out what’s going on, what’s going to happen, why it’s going to happen, and what you can do about it. And so, yeah, to the extent that you’ve ever been really impressed with someone’s work in one of these meetings, it’s always because the visualization and explanation is clear and makes sense and is concise. And to the extent that you’ve ever been frustrated or confused in one of these sessions, regardless of where you sit in an organization, it’s always because here’s a random chart I made that’s not clearly explained, with some assumptions behind it that are also not clearly explained, and then it’s trying to sell you a conclusion, which I don’t know if I should believe. Like, that’s why they say there’s lies, damn lies, and statistics, because you can really shape a story with data, and sometimes the shaping that went into the story is not clear. And so, yeah, visualization, and also surfacing of assumptions, or driving factors as we call them, is, I think, very important.

The nice thing about being able to use machines to do that is you don’t get these mistakes, so to speak; it just says, here’s the pattern in that data. You could still filter it, you could still remove relevant information, all of that could still happen. But for the most part, it’s less prone to interpretation, like, mistaken interpretation. And it helps with shared understanding. And then, of course, you have to show it in a manner that’s understandable and simple.

JS: Right, yeah. What’s been going on over the last couple of months for you all, I mean, with DALL-E kind of first, a few months ago, and ChatGPT, like, what have you been seeing?

JR: Yeah, I call these, like, the Gen2 wave of AI tools, and there are a few key things that are happening with them that make them so. I think the most important first principle of all of these is they’re self-serve, and if you think about it, most of the tooling that existed before these tools was not self-serve. So no individual user could really go get big value from it in their daily job. And so, that changed, and then, general awareness amongst everybody that suddenly you can get more efficient with a self-serve tool is what has caused a massive influx of awareness; I think it’s happening across the board. And there’s a lot of noise associated with that, but also, a lot of interest. We’re getting massively more inbound, an order of magnitude more inbound, over the last three months than we did over the year before. And it’s all just people realizing, I could start to use this myself to do these jobs, it’ll make me more efficient, like, sort of a ground-up AI adoption thing where you get quick wins, you see the value immediately; these don’t have to be big projects anymore, they don’t have to take hundreds of thousands of dollars and tens of people to do in a business. And that’s making for a big breakthrough, I think, for everybody.

And the second important thing is, I think, the most successful businesses in adoption will be using these everywhere in their org. So internally, we’re telling everyone that they have to be using these tools in their job, or they’re not a fit for our organization – you can’t be an AI-native company and not have everyone work AI natively. But especially as a startup, where you’re resource-constrained, the ability to make people two times as efficient at their job, that’s massive, right? And, like, even Copilot, we didn’t talk about software engineering, but a lot of what we do is software engineering, and the ability to have a companion, like, code-generation tool that makes you more efficient at writing software, that’s been a massive game changer for us in terms of execution speed. So really, I think the point is, everyone’s waking up to: it doesn’t matter what your role is, if you’re not using one of these tools to make yourself more efficient at it, you’re probably working slower and less effectively than everyone else, or will be sometime soon. I think we’re still a little early in the adoption curve, but that’s happening fast.

JS: Yeah, right, not trying to get [inaudible 00:33:45] to say they’re in love with you, but actually trying to, like [inaudible 00:33:48] actually doing work, yeah.

JR: I mean, yeah, you can manipulate many of these generative tools to search, and then get some awareness buzz on Twitter or something, that’s fine. But they actually are very efficient, practical tools in businesses. And the trick there, I’ve seen, especially with most of the generative tools, is figuring out how to prompt it effectively. I think there’s an entire skill set around that. And actually, when we build them into our user experience, which we do in more and more places in our product, the trick there is how we prompt the NLP engine in the back end, given the user input. So if you ask to transform a date, we don’t just send over “transform a date” to GPT; we send a big structured prompt that will get us back exactly what we need to transform that data in our platform. And it’s taken us a while to iterate on that. And we do some other things too, like, we take the code that we get back to apply to the data table, and then, we send that back to the language model and ask it to describe what it does, mostly. And then, we show you, like, here’s how it was interpreted, because a lot of times natural language is not the most complete way of specifying an ask, you know, people can be very loose in their natural language, like, I see this all the time. And so, when we give you back, oh, here’s how I’d interpret what you’re asking, you’d be like, oh yeah, I see why it took it that way [inaudible 00:35:07]
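
That round trip, structuring the loose request, applying the returned code, then asking the model to describe what the code does so the user can confirm the interpretation, might look like the following sketch. Again, `call_llm` is a hypothetical stand-in, hard-coded so the example runs offline:

```python
# Sketch of the prompt round trip: wrap the user's loose request in a
# structured prompt, apply the code that comes back, then ask the model to
# describe that code so the user can confirm the interpretation.
import pandas as pd

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call, hard-coded so the sketch runs offline.
    if prompt.startswith("Describe"):
        return "Parses the 'signup' column as dates and rewrites it as ISO 8601 (YYYY-MM-DD)."
    return "df['signup'] = pd.to_datetime(df['signup']).dt.strftime('%Y-%m-%d')"

df = pd.DataFrame({"signup": ["03/15/2023", "04/01/2023"]})
user_request = "reformat this date to an ISO standard"

# 1. Structure the loose request with schema context before sending it over.
prompt = (
    f"You write pandas code operating on a DataFrame `df` with columns "
    f"{list(df.columns)}. Return only code. Task: {user_request}"
)
code = call_llm(prompt)
exec(code, {"df": df, "pd": pd})   # apply the returned transformation

# 2. Close the loop: have the model explain the code back to the user.
print(call_llm(f"Describe what this code does: {code}"))
print(df)   # signup column now in ISO format: 2023-03-15, 2023-04-01
```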

JS: Right, yeah, that’s interesting.

JR: It sort of helps close the loop. And then, people learn really fast. Like, the closest analog I can come up with is googling something; there’s an art to searching things online. Some people are better at it than others, and you kind of learn it by searching and iterating through it until you figure out how to frame your query to get the response you’re looking for. Same thing with working with any of these tools: there’s a learning curve, but once you’re past it, you can get a lot of real value out of it. And I see some people trying it and saying it didn’t work, and I’m like, well, you probably didn’t ask the question in the right way. It’s really, really the thing. And so, like, yeah, lots of noise though.

JS: Yeah, it’s the computer’s fault, yeah.

JR: Well, I mean, to some extent, these tools are going to get better faster, and I think, before you know it, it’ll just work and people will be like, I don’t know why I didn’t see this in the beginning.

JS: Before we go, how can people sign up, use it, what are all the – I mean, everybody can check out the links, they are on the show notes, but what are the details of getting in and starting to use it?

JR: Yeah, I mean, we have an open platform, that’s kind of been our philosophy from day one. So anybody can make an account and get a free trial for a couple of weeks. Just click create account, and we’ve got some onboarding in there that will walk you through it. We’ve got some demo videos. But you can just upload your data, or connect it if it’s in a live data source, and you can get right into manipulating it with natural language, creating visualizations, and building ML models. And then, we do have a second motion where we help you. So if you’re a business, and you need some assistance, or want to understand how to best leverage it in your particular area, we have solutions engineers who are set up to help do proofs of value for those businesses. So if you’d like that, you can just request a demo, and we’ll get in touch, and we’ll help prove the value to you. We’re set up so that we win when you win; our pricing is on the lower end. If you’re not getting value from machine learning, you’re probably not using it in the right way. This should ROI very, very quickly for your business, and we’ll help you get there.

JS: Cool. Well, I’m excited to see what happens. It’s an interesting time to say the least. So good luck with everything, excited to see how it plays out.

JR: Yeah, thanks. It’s exciting and looking forward to seeing where things go.

JS: Yeah. Thanks, Jon, for coming on the show. I appreciate it.

JR: Thanks for having me.

Thanks, everyone, for tuning in to this week’s episode of the show. I hope you enjoyed that, and I hope you’ll check out Akkio and their services and maybe play around a little bit. If you would like to support the show, please rate or review the show on any of your favorite podcast providers. This show is now available on Zencastr, Stitcher, Google Play, iTunes, Spotify, and anywhere you get your podcasts, and, of course, also directly on policyviz.com. So until next time, this has been the PolicyViz podcast. Thanks so much for listening.

A number of people help bring you the PolicyViz podcast. Music is provided by the NRIs. Audio editing is provided by Ken Skaggs. Design and promotion is created with assistance from Sharon Stotsky Ramirez. And each episode is transcribed by Jenny Transcription Services. If you’d like to help support the podcast, please share it and review it on iTunes, Stitcher, Spotify, YouTube, or wherever you get your podcasts. The PolicyViz podcast is ad free and supported by listeners. If you’d like to help support the show financially, please visit our PayPal page or our Patreon page at patreon.com/policyviz.