Popular media and hype cycles are leading us to expect radical changes and perfect results from artificial intelligence. But you don’t read perfectly, so why should your AI? Let’s explore when “good” is good enough for AI.
We have high expectations of AI. If it doesn’t spit out a flawless output, it’s not good enough. But are we being unreasonable in our expectations?
Sure, it’s not great when a computer mistakes a 3D-printed turtle for a rifle. And we get that you might be nervous about the defensive driving skills of your self-driving car.
But when the stakes are lower—say, extracting the text from an article and assessing its sentiment—how perfect does an AI truly need to be?
What does “good enough for AI” mean?
When dealing with data, we’re also dealing with imprecision and incompleteness. There’s a point where we have to agree to compromise so that things actually get done. Analysis paralysis is real, and it’s expensive.
Rather than aiming for perfection, we should be aiming for good enough. Obviously, tolerances for what’s “good enough” will vary across domains and projects. “Good enough” for a flight path is probably a bit more stringent than “good enough” for reading house numbers.
At Lexalytics, an InMoment company, we work with text analytics. We extract text data and analyze it to glean sentiment at various levels—the word level, the sentence level, the discourse level, and the cross-document level.
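To make those levels concrete, here's a toy sketch of scoring sentiment at the word, sentence, and document level. The tiny lexicon and the simple averaging scheme are hypothetical stand-ins for illustration only, not Lexalytics' actual models.

```python
# A hypothetical word-polarity lexicon: +1.0 is strongly positive,
# -1.0 is strongly negative. Real systems use far richer models.
LEXICON = {"great": 1.0, "love": 1.0, "good": 0.5,
           "bad": -0.5, "awful": -1.0, "hate": -1.0}

def word_scores(sentence):
    # Word level: look up each token's polarity (0.0 if unknown).
    return [LEXICON.get(w.strip(".,!?").lower(), 0.0)
            for w in sentence.split()]

def sentence_score(sentence):
    # Sentence level: average the word polarities.
    scores = word_scores(sentence)
    return sum(scores) / len(scores) if scores else 0.0

def document_score(text):
    # Document level: average over the sentence scores.
    sentences = [s for s in text.split(".") if s.strip()]
    return sum(sentence_score(s) for s in sentences) / len(sentences)

doc = "The service was great. The food was awful."
print(sentence_score("The service was great"))  # 0.25
print(document_score(doc))                      # 0.0 (mixed review)
```

Each level compresses the one below it, and each compression throws away detail—which is exactly the "good enough" trade-off the rest of this piece is about.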
Getting it right matters, but does getting it perfect? Often the gap between right and perfect is small enough to be of vanishing importance.
Perfection is a measure of time and resources
Say our extracted text contains a typo, or it’s pulled in some extraneous metadata. Maybe a whole sentence is missing.
As a human reader, you’ll still get the gist of what’s going on. As humans, we’re used to creating “good enough” understanding from “good enough” data. We skim, we condense, and we weigh information based on our goals. A painstaking, comprehensive reading takes time and resources, so we assign them accordingly.
The same is true of AI. We could spend the time and resources on solving the problem perfectly rather than just solving it. But frankly, we think that usually those resources could be better used elsewhere—like for crunching more data.
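A toy example of why noisy input often doesn't matter: a simple keyword-based classifier (the keyword lists here are hypothetical, purely for illustration) reaches the same verdict on a messy extraction—typos, stray HTML, ad markers and all—as on the clean original.

```python
# Hypothetical sentiment keyword sets for a toy classifier.
POSITIVE = {"great", "excellent", "love"}
NEGATIVE = {"terrible", "awful", "hate"}

def classify(text):
    # Count positive vs. negative keywords; everything else is ignored,
    # including typos and stray markup the extractor pulled in.
    words = {w.strip(".,!?").lower() for w in text.split()}
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    return "negative" if neg > pos else "neutral"

clean = "I love this product. It is excellent."
noisy = "<div>I love thsi product. It is excellent</div> [ad_slot_3]"
print(classify(clean), classify(noisy))  # both come out "positive"
```

The noisy version misses some signal (the typo and the leftover markup hide a couple of keywords), yet the answer is identical. Spending resources to clean that input perfectly would have bought us nothing here.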
Mo’ data means mo’ problems (for us to solve)
AI feeds on data. The more data it has access to, the better and more precise it gets. The current explosion of AI has a lot to do with the fact that we live in the data age.
Every day, people conduct billions of searches and, in that same window, upload billions of social media posts, millions of news articles, and countless photos and videos. It’s a big data wonderland out there.
What do you think is better for your AI? Being perfectly trained on a carefully curated subset of data, or being able to graze across limitless pastures?
Making sure that your AI has perfect input or flawless output isn’t the goal. The goal is to be good enough to solve your problem. Let’s focus on getting it right rather than getting it perfect.
And besides, with enough data input, “good enough for AI” will eventually approach perfect anyway.