The Two (Conflicting) Definitions of AI

Author: William Vorhies

Summary: There are two definitions of AI currently in use, the popular definition and the data science definition, and they conflict in fundamental ways. If you’re going to explain or recommend AI to a non-data scientist, it’s important to understand the difference.

 

For a profession as concerned with accuracy as we are, we do a really poor job of naming things, or at least of being consistent in the naming. “Big Data” is totally misleading, since the term incorporates velocity and variety in addition to volume. How many times have you had to correct someone on that?

And look back at all the things we’ve called ourselves since the late 90s. These names don’t describe different outcomes, or even really different techniques. We’re still finding the signal in the data with supervised and unsupervised machine learning.

So now we have Artificial Intelligence (AI), for which there are at least two competing definitions: the popular one and the one understood by data scientists. And that doesn’t even account for the dozens of Venn diagrams trying to describe which field is a subset of which, nearly all of them in conflict with one another.

I’m sure by now you’ve heard the old joke.  What’s the definition of AI?

When you’re talking to a customer it’s AI.

When you’re talking to a VC it’s machine learning.

When you’re talking to a data scientist it’s statistics.

It would be even funnier if it weren’t true, but it is.

So it’s a worthwhile conversation to go directly at these two definitions and see where they conflict, and where, if anywhere, they converge.

 

The Popular Definition

This definition got underway 12 or 18 months ago and seems to have unstoppable momentum. In my opinion that’s too bad, since it’s misleading in many respects. Gathered from a variety of sources and distilled here, the popular definition of AI is:

Anything that makes a decision or takes an action that a human used to take, or helps a human make a decision or take an action.

The main problem with this is that it describes everything we do in data science, including every machine learning technique we’ve been using since the 90s.

As I gathered up different versions of this to distill here, it became apparent that there are four different groups promoting this meme.

  • AI Researchers: They’re getting all the press, and they want to claim ‘machine learning’ as something unique to AI.
  • The Popular Press: They’re simply confused and can’t tell the difference.
  • Customers: They increasingly ask, ‘give me some of that AI’.
  • Platform and Analytics Vendors: Their reasoning runs, ‘if customers want AI, then we’ll just call everything AI and everyone will be happy’.

 

The Data Scientist’s Definition

Those of us professionally involved in these techniques know that a set of new or expanded methods evolved over the last ten years, including deep neural nets and reinforcement learning.

These aren’t radically new techniques: they grew out of neural nets that had been in our toolbox for a long time, but they blew up on the steroids of MPP (the massively parallel processing that NoSQL and Hadoop brought us), GPUs, and vastly expanded cloud compute.
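To make that concrete, here’s a minimal sketch in Python using the Keras API (chosen purely for illustration; the layer sizes, input shape, and training settings are invented for the example). The point is that a ‘deep’ net is structurally the same multilayer perceptron we had in the 90s, just with more layers stacked.

    # A minimal sketch, not a production model: the same Dense building
    # block gives us both the 90s-era shallow net and its "deep" descendant.
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense

    # The classic shallow net: a single hidden layer.
    shallow = Sequential([
        Dense(16, activation="relu", input_shape=(10,)),
        Dense(1, activation="sigmoid"),
    ])

    # The "deep learning" version: the same blocks, stacked deeper and wider.
    # Training nets like this at scale is what GPUs and MPP made practical.
    deep = Sequential([
        Dense(256, activation="relu", input_shape=(10,)),
        Dense(256, activation="relu"),
        Dense(256, activation="relu"),
        Dense(256, activation="relu"),
        Dense(1, activation="sigmoid"),
    ])

    shallow.compile(optimizer="adam", loss="binary_crossentropy")
    deep.compile(optimizer="adam", loss="binary_crossentropy")

Nothing in the second model is conceptually new; what changed is the hardware that made stacking and training those layers feasible.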

When you looked at these from the perspective of the AI founders like Turing, Goertzel, and Nilsson, you could see these newly expanded capabilities as the eyes, ears, mouth, hands, and cognitive ability that started to add up to their vision of what artificial intelligence was supposed to be able to do.

 

Data scientists understand that the definition of AI as we practice it today is really a collection of six specific techniques, some more advanced toward commercial readiness than others.

 

Is There Any Common Ground?

It’s narrow, but there is some common ground between these two definitions, primarily in the backstory for AI. The popular press has mostly represented AI as something brand new, but the correct way to look at it is as an evolution over time.

 

I think we all understand that we stand on the shoulders of those who came before. Even as far back as the 90s, we were building hand-crafted decision trees that we called expert systems to take the place of human decision making in complex situations.

Once you understand that the popular definition wants to include everything that makes a decision, it’s easy to see the progression through machine learning and Big Data into deep learning.
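To see that progression in miniature, here’s a hedged Python sketch (the feature names, thresholds, and toy data are all invented for illustration): the first function is the kind of hand-crafted rule we once called an expert system, and the second is the same sort of decision learned from data with scikit-learn.

    # A minimal sketch contrasting a hand-crafted rule with a learned tree.
    from sklearn.tree import DecisionTreeClassifier

    def expert_system_approve(income, debt_ratio):
        # 90s-style "expert system": rules written by a human domain expert.
        return income > 50_000 and debt_ratio < 0.4

    # The machine learning successor: the same kind of tree, but the splits
    # are learned from labeled examples instead of written by hand.
    X = [[60_000, 0.30], [30_000, 0.50], [80_000, 0.20], [40_000, 0.60]]
    y = [1, 0, 1, 0]  # toy labels: 1 = approve, 0 = decline
    learned_tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    print(expert_system_approve(55_000, 0.35))     # hand-coded decision
    print(learned_tree.predict([[55_000, 0.35]]))  # learned decision

Either way, something is making a decision a human used to make, which is exactly why the popular definition sweeps all of it under ‘AI’.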

One place where the casual reader needs to be careful, though, is in understanding which elements of AI are commercially ready. Among the six techniques or technologies that make up AI, only CNNs for image and video and RNN/LSTMs for text and speech are at commercially acceptable performance levels.

What you may need to explain to your executive sponsors is that these six ‘true’ AI methods are still the bleeding edge of our capabilities. Projects based on them are high cost, high effort, and higher risk.

The conclusion ought to be that many business solutions can be based on machine learning without involving true AI methods. As more third-party vendors create industry- or process-specific solutions using these new techniques, this risk will diminish, but that’s not today.

For the rest of us, the conflict of definitions remains. When someone asks you about AI, you’re still going to need to ask, ‘What do you mean by that?’

 

 


 

About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001.  He can be reached at:

Bill@DataScienceCentral.com or Bill@Data-Magnum.com

 
