The Next Big Thing in AI/ML is…

Author: William Vorhies

Summary:  AI/ML itself is the next big thing for many fields if you’re on the outside looking in.  But if you’re a data scientist, you can already see the advancements that will propel AI/ML into its next phase of utility.

 

“The Next Big Thing in AI/ML is…” as the lead to an article is probably the most overused trope since “once upon a time”.  Seriously, just how many ‘next big things’ can there be?  Isn’t your credulity stretched every time you read that?

It’s tempting to say that writers starting an article this way should be flogged… except that yours truly did recently start one with “the next most IMPORTANT thing in AI/ML…”  Well, that’s clearly different, isn’t it – almost.

If you label something ‘next big thing’ it’s evident you have a strong opinion – or your marketing department has no imagination. 

First of all, if you’re on the outside of AI/ML looking in, AI/ML clearly is the next big thing.  Most next-big-thing articles are actually in this category, explaining how AI/ML can enhance everything from your dating life to your investment portfolio.

But if you’re fortunate enough to be on the inside, as our readers are, then you know that the future of AI/ML is developing along many different paths, and some of those should be more important than others.  Some are technical, some are applications, and some are even social or philosophical.  So how do we tell what the next big thing is, or at least what the rankings should be?

To give this a little structure we need some guardrails.  If it’s ‘next’ then the time horizon needs to be reasonably short and the technical details need to be currently attainable.  This leaves out all those Quora answers about transhumanism, AGI, nanorobotics, genetic predictions, and neuromorphic hardware.

Also if it’s ‘next’ that means it’s not yet fully realized.  Alexa, facial recognition on my iPhone, my Roomba, and the chatbot at my bank are all wonderful, but those AI/ML application technologies are fully realized.  Been there, done that.

So after spending several hours consulting the oracle of Google, I would offer that there is an answer to this question, but it depends on where you are standing when you look into the future.  Here are four ways of looking at this that give somewhat different answers.

 

Things that lead to the greatest good for the greatest number of people

This is really a question of which yet-to-be-cracked applications would bring the most good to the most people.  The answer is healthcare, both as you experience it today in your doctor’s office and as it will shortly change when AI/ML is applied to personalized medicine and drug discovery.

Despite all the hype about AI/ML, particularly imaging systems able to discern things in MRIs and lab tests that clinicians can’t spot, you might be surprised to learn that only about 1% of the roughly 6,000 US hospitals have data science programs.

The causes of slow adoption are both financial and cultural, with culture probably the more important.  The evidence is that AI/ML developers are not paying close enough attention to how they disrupt the workflow of doctors and clinicians, and that this friction slows adoption.

Yet opportunities abound in:

  1. Drug discovery and innovation, both personalized and precision medicine.

Of all the AI/ML opportunities in healthcare, this one is actually furthest along, largely because it’s big pharma that pays, not the insurers.

  2. The business of healthcare.

The operational world of the clinician may be unique but at a business level hospitals and healthcare organizations share some marked similarities with the commercial world.

  3. Patient intake and referral.

Determining whether and when a patient gets to see a doctor is a major control point for constraining healthcare costs, especially in single payer systems.  The initial consult with the doctor is also a major time sink that could be at least partially automated.

  4. Clinical applications – what happens between clinician and patient. AKA the AI/ML augmented physician.

If you want AI/ML to succeed in improving healthcare it needs to get into the space between the doctor and the patient.  Several major subsets of this opportunity need to be understood separately.

4.1 Automated / Semi-automated interpretation of medical images.

4.2 Enabling more accurate identification of disease subtypes – precision medicine.

4.3 AI/ML-driven triage and prevention, including protocols that produce better outcomes, prevent bad ones, and focus on anomalies and preventable harm.

 

Things that lead to cost savings, greater efficiency, and relief from low-value repetitive work

You may immediately think I mean robots and I do, but probably not in the way you’re imagining.  This is actually about Robotic (or Digital) Process Automation (RPA).

RPA as a rules-driven, non-AI/ML technique has actually been around for some time.  My feathers get ruffled when researchers want to say that current iterations of RPA are in fact an implementation of AI/ML.

This can be really misleading.  Based on a McKinsey study, we reported that 47% of companies had at least one AI/ML implementation in place.  Looking back at the data, and at the dominance of RPA as the most widely reported instance, we now think that number is probably significantly overstated, maybe by as much as half.

But as RPA progresses, it increasingly includes elements of embedded AI, just not the fancy or customized versions we data scientists are used to thinking about.  For example, RPA platforms are increasingly embedding chatbots for both text and voice communication.  Chatbots based on deep learning are a fully mature technology.

Increasingly you’ll also find image recognition tech based on CNNs hidden in the document and screen recognition features.  For example, when you direct your RPA engine to schedule an appointment on someone else’s calendar or make an airline reservation, it has to recognize active elements on unfamiliar screens, a task as basic as identifying which button to press to enter a command.  Embedded AI/ML computer vision tech can do this.
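To make that concrete, here is a minimal sketch of the kind of small CNN that could classify a cropped screen region as a button, text field, checkbox, or something else.  This is not any vendor’s actual RPA code; the architecture, crop size, and labels are all assumptions for illustration.

```python
# Toy CNN for labeling screen crops as UI elements (hypothetical, not a
# real RPA product's model).  Input: a 64x64 RGB crop of a screenshot.
import torch
import torch.nn as nn

class ScreenElementCNN(nn.Module):
    """Classifies a screen crop as button / text field / checkbox / other."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = ScreenElementCNN()
crop = torch.rand(1, 3, 64, 64)        # stand-in for a real screenshot crop
probs = model(crop).softmax(dim=1)     # e.g. probability the crop is a button
print(probs)
```

A trained version of something like this is what lets an RPA engine find the right button on a screen it has never seen before.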

Most automated processes today are more in the nature of tasks than the large-scale processes we usually think of.  That capability will continue to expand, and if the AI/ML is completely integrated and essentially hidden from the user, it is still, after all, AI/ML.

 

Things that will advance the technical capabilities of AI/ML the most and the fastest

Machine learning hasn’t had a breakthrough technique since about 2016, and AI deep neural nets have also been reduced to incremental improvements since then.  I have two candidates in this column:

  1. Deep Reinforcement Learning

DRL is the current mashup of reinforcement learning with deep neural nets.  The magic of reinforcement learning is that it can start with essentially a blank slate and develop better-than-human performance, especially since it does not require training data in the classical sense.  In essence, reinforcement learning is a brute-force iterative technique not much different from the evolutionary algorithms of ML.  Adding deep neural nets as the function approximators promises a breakthrough in performance.
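For a flavor of how the two pieces fit together, here is a toy sketch of the core deep Q-learning update at the heart of many DRL systems.  There is no real environment here; the batch of transitions, network shape, and hyperparameters are all invented for illustration.

```python
# Toy deep Q-learning update: regress Q(s, a) toward the Bellman target.
# Everything here (shapes, data, hyperparameters) is invented for illustration.
import torch
import torch.nn as nn

n_states, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(n_states, 32), nn.ReLU(), nn.Linear(32, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One fake batch of (state, action, reward, next_state) transitions.
state = torch.rand(8, n_states)
action = torch.randint(0, n_actions, (8,))
reward = torch.rand(8)
next_state = torch.rand(8, n_states)

# Bellman target: r + gamma * max_a' Q(s', a'); no gradient flows through it.
with torch.no_grad():
    target = reward + gamma * q_net(next_state).max(dim=1).values

# Pull the Q-values of the actions actually taken toward the target.
q_pred = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q_pred, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a real system the fake batch would come from a replay buffer of the agent’s own trial-and-error experience, which is what lets it start from a blank slate and improve by iteration alone.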

  2. Deeply Inclusive or Contextually Sensitive AI

The scope of what we might call awareness in our chatbots is quite limited.  Ask one for the weather, the game score, or to play your favorite song, and its advanced information retrieval algos will comply.  But the ability to string together things Alexa already knows about us to make helpful suggestions in our lives doesn’t yet exist.

For example, why shouldn’t Alexa remember that it’s my mother’s birthday and suggest a gift based on its knowledge of her likes and my interactions with her?  That sort of stringing together of the contextual elements of a ‘story’ will make our AI-based virtual assistants much more valuable.
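As a toy illustration of the mechanics (every name, date, and preference below is invented), the missing piece is less a new algorithm than the plumbing that connects facts the assistant already holds:

```python
# Toy "contextual story" assembly: connect stored facts into a suggestion.
# All names, dates, and preferences are invented for illustration.
import datetime
from typing import Optional

knowledge = {
    "mother": {
        "birthday": datetime.date(2024, 6, 14),    # hypothetical stored fact
        "likes": ["gardening", "mystery novels"],  # inferred from interactions
    }
}

def birthday_suggestion(person: str, today: datetime.date) -> Optional[str]:
    facts = knowledge.get(person)
    if facts is None:
        return None
    bday = facts["birthday"]
    if (bday.month, bday.day) == (today.month, today.day):
        interest = facts["likes"][0]
        return (f"It's your {person}'s birthday today. "
                f"She likes {interest}; want some gift ideas?")
    return None

print(birthday_suggestion("mother", datetime.date(2024, 6, 14)))
```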

 

Things that mitigate resistance to AI/ML so that society can readily accept this new capability

How bothered we are by, or resistant to, new AI/ML tech seems to depend on how the question is asked.

If you ask the public whether they like the convenience of internet shopping, the recommendations tailored specifically to them, or the actual reduction in cost that AI/ML-enabled targeting has created, their response is overwhelmingly positive:

  • 91% prefer brands that provide personalized offers / recommendations.
  • 83% are willing to passively share data in exchange for personalized experiences.
  • 74% are willing to actively share data in return for personalized experiences.

Ask the same folks about privacy, or let them hear how the evil technology is biased against some ethnic or socio-economic group, and their attitude changes dramatically.  The media was quick to learn that publishing a thumb-sucker about a data breach gets our clicks faster than a funny cat video.

Although our most accurate models reach their classifications with a level of complexity that makes them impossible to explain directly, we are in fact seeing major breakthroughs in techniques for transparency and explainability.
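One widely used example is SHAP, which attributes each individual prediction to the input features.  Here is a minimal sketch on synthetic data (the features, labels, and model are invented for illustration, and it assumes the shap package is installed):

```python
# Explain an otherwise opaque ensemble model with SHAP values.
# Synthetic data: the outcome is driven mostly by feature 0.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                   # four made-up features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # known ground truth

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # contributions for the first five predictions
```

Those per-feature attributions are what let you tell a patient, a regulator, or a journalist why the model decided what it did.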

Beyond simple transparency, however, I would argue that our ability to show and explain causality is the next close-in frontier.  The techniques are available, but to this point there hasn’t been enough demand to justify adding this level of complexity to our algos.  Now that the societal pressure has become great, causality may be the next most important thing in allowing the public to embrace the benefits of AI/ML.
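Much of this is classical statistics as much as ML.  As one back-of-envelope example, here is inverse propensity weighting on synthetic data (the data-generating process and the true effect of 2.0 are invented), recovering a treatment effect that a naive comparison overstates:

```python
# Inverse propensity weighting (IPW) on synthetic data with a known
# treatment effect of 2.0 and a confounder x that biases the naive estimate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 1))                                    # confounder
t = (rng.random(1000) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)   # treatment depends on x
y = 2.0 * t + x[:, 0] + rng.normal(size=1000)                     # true effect = 2.0

naive = y[t == 1].mean() - y[t == 0].mean()   # biased by the confounder

# Weight each unit by its estimated probability of the treatment it received.
p = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ipw = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))
print(f"naive: {naive:.2f}  IPW: {ipw:.2f}  truth: 2.00")
```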

 

 


About the author:  Bill is Contributing Editor for Data Science Central.  Bill is also President & Chief Data Scientist at Data-Magnum and has practiced as a data scientist since 2001.  His articles have been read more than 2 million times.

He can be reached at:

Bill@DataScienceCentral.com or Bill@Data-Magnum.com
