What Is Machine Learning?

Author: datascience@berkeley Staff

Contributing Author: Dr. Michael Tamir

Whether you know it or not, you’ve probably been taking advantage of the benefits of machine learning for years. Most of us would find it hard to go a full day without using at least one app or web service driven by machine learning. But what is machine learning?

Though the term machine learning has become increasingly common, many still don’t know exactly what it means and how it is applied. Nor do they understand the role of machine learning algorithms and datasets in data science. We will examine how machine learning is defined as a tool used by data scientists and take a bird’s-eye view of how it was developed, how it is currently being used, and what lies ahead as it continues to evolve. 

Definition of Machine Learning

The basic concept of machine learning in data science involves using statistical learning and optimization methods that let computers analyze datasets and identify patterns (R2D3 offers a visual walkthrough of machine learning). Machine learning techniques use data mining to identify historical trends that inform future models.

The typical supervised machine learning algorithm consists of (roughly) three components:

  1. A decision process: A recipe of calculations or other steps that takes in the data and returns a “guess” at the kind of pattern in the data your algorithm is looking to find.
  2. An error function: A method of measuring how good the guess was by comparing it to known examples (when they are available). Did the decision process get it right? If not, how do you quantify “how bad” the miss was?
  3. An updating or optimization process: A way for the algorithm to look at the miss and adjust how the decision process arrives at its final decision, so that the next time the miss won’t be as great.

For example, if you’re building a movie recommender, your algorithm’s decision process might look at how similar a given movie is to other movies you’ve watched and come up with a weighting system for different features.

During the training process, the algorithm goes through the movies you have watched and weights different properties. Is it a sci-fi movie? Is it funny? The algorithm then tests out whether it ends up recommending movies that you (or people like you) actually watched. If it gets it right, the weights it used stay the same; if it gets a movie wrong, the weights that led to the wrong decision get turned down so it doesn’t make that kind of mistake again.
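
The loop below is a minimal sketch of that process in Python. It is not how any production recommender actually works: the two features (is it sci-fi? is it funny?), the tiny viewing history, and the learning rate are all invented for illustration, and the update rule is a simple perceptron-style adjustment.

    # A toy version of the three components above: a decision process
    # (a weighted score over features), an error function (compare the
    # guess to what was actually watched), and an update step (adjust
    # the weights that caused a miss). All data here is made up.

    training_data = [
        # (is_scifi, is_funny), watched?
        ((1.0, 0.0), 1),   # sci-fi, not funny  -> watched
        ((1.0, 1.0), 1),   # sci-fi and funny   -> watched
        ((0.0, 1.0), 0),   # funny, not sci-fi  -> skipped
        ((0.0, 0.0), 0),   # neither            -> skipped
    ]

    weights = [0.0, 0.0]   # one weight per feature
    bias = 0.0
    learning_rate = 0.1

    def decide(features):
        """Decision process: a weighted score over the movie's features."""
        score = sum(w * x for w, x in zip(weights, features)) + bias
        return 1 if score > 0 else 0   # 1 = recommend, 0 = don't recommend

    for epoch in range(20):
        for features, watched in training_data:
            guess = decide(features)
            error = watched - guess           # error function: how far off was the guess?
            if error != 0:                    # update step: nudge the weights behind the miss
                weights = [w + learning_rate * error * x
                           for w, x in zip(weights, features)]
                bias += learning_rate * error

    print(weights, bias)          # the learned weighting system
    print(decide((1.0, 0.0)))     # a sci-fi title should now be recommended

After a few passes through this tiny history, the weight on the sci-fi feature ends up positive and the recommender reproduces the viewing pattern: weights that led to right answers stay put, and weights behind wrong recommendations get turned down, just as described above.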

Since a machine learning algorithm updates autonomously, the analytical accuracy improves with each run as it teaches itself from the data it analyzes. This iterative nature of learning is both unique and valuable because it occurs without human intervention — providing the ability to uncover hidden insights without being specifically programmed to do so. 

What Are Some Machine Learning Methods?

Many machine learning models are defined by the presence or absence of human influence on raw data: whether a reward is offered, specific feedback is given, or labels are used.

According to Nvidia.com, machine learning models come in several varieties, including the following (the first two are contrasted in a short code sketch after this list):

  • Supervised learning: The dataset being used has been pre-labeled and classified by users to allow the algorithm to see how accurate its performance is.
  • Unsupervised learning: The raw dataset being used is unlabeled and an algorithm identifies patterns and relationships within the data without help from users.
  • Semisupervised learning: The dataset contains a mix of labeled and unlabeled data, which guides the algorithm toward making independent conclusions. Combining the two kinds of examples in one training dataset lets machine learning algorithms learn to label the unlabeled data.
  • Reinforcement learning: The algorithm receives feedback through a “rewards/punishments” system, learning from its own experience by trial and error.
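
To make the contrast between the first two types concrete, here is a minimal sketch in Python, assuming the scikit-learn library is available; the handful of data points is invented purely for illustration.

    # Supervised vs. unsupervised learning with scikit-learn.
    # The tiny dataset below is made up for illustration only.
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = [[1.0, 0.2], [0.9, 0.1], [0.2, 0.8], [0.1, 0.9]]   # feature vectors
    y = [1, 1, 0, 0]                                        # labels supplied by users

    # Supervised: the algorithm sees features and labels, so its
    # performance can be checked against the known answers.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[0.95, 0.15]]))   # predicted label for a new example

    # Unsupervised: the same features with no labels; the algorithm
    # looks for structure (here, two clusters) on its own.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)                    # cluster assignment for each example

LogisticRegression and KMeans are simply convenient stand-ins here; any supervised classifier or clustering method would illustrate the same contrast between learning with and without labels.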

Finally, there’s the concept of deep learning, a newer area of machine learning that automatically learns from datasets without introducing human rules or knowledge. It requires massive amounts of raw data for processing, and the more data it receives, the more the predictive model improves.

Why Is Machine Learning Important?

Machine learning and data mining, a component of machine learning, are crucial tools for gleaning insights from the massive datasets held by companies and researchers today. There are two main reasons for this:

  • Scale of data: Companies face massive volumes and varieties of data that need to be processed. With processing power now more efficient and readily available, models that can be programmed to process data on their own, draw conclusions, and identify patterns are invaluable.
  • Unexpected findings: Since machine learning algorithms update autonomously, the analytical accuracy improves with each run as it teaches itself from the datasets it analyzes. This iterative nature of learning is unique and valuable because it occurs without human intervention, providing the ability to uncover hidden insights without being specifically programmed to do so.

Who Is Using Machine Learning?

Companies that leverage algorithms to sort through data and optimize business operations are nothing new. According to SAS Insights, the practice extends not only to digital business models such as web services and apps but also to any company or industry where data can be gathered, including the following:

  • Marketing and sales
  • Financial services
  • Brick-and-mortar retail
  • Healthcare
  • Transportation
  • Oil and gas
  • Government

Amazon, Facebook, Netflix, and, of course, Google have all been using machine learning algorithms to drive searches, recommendations, targeted advertising, and more for well over a decade. Uber Eats, for example, shared in a GeekWire piece that the company uses data mining and machine learning to estimate delivery times. 

Evolution of Machine Learning

Although advances in computing technologies have made machine learning more popular than ever, it’s not a new concept. The origins of machine learning date back to 1950, according to a Forbes article. Speculating on how one could tell whether a machine had achieved true artificial intelligence (AI), Alan Turing created what is now referred to as the Turing test, which suggests that one way to tell whether an AI is capable of understanding language is to see whether it can fool a human into thinking they are speaking to another person.

In 1952, Arthur Samuel wrote the first learning program for IBM, a program that played checkers. The work of many other machine learning pioneers followed, including Frank Rosenblatt’s design of the first neural network in 1957 and Gerald DeJong’s introduction of explanation-based learning in 1981.

In the 1990s, a major shift occurred in machine learning when the focus moved away from a knowledge-based approach to one driven by data. This was a critical decade in the field’s evolution, as scientists began creating computer programs that could analyze large datasets and learn in the process.

The 2000s were marked by unsupervised learning becoming widespread, eventually leading to the advent of deep learning and the ubiquity of machine learning as a practice.

Milestones in machine learning are often marked by an algorithm beating a human at a given task, including Russian chess grandmaster Garry Kasparov’s defeat by the IBM supercomputer Deep Blue in 1997 and, more recently, the 2016 victory of Google DeepMind’s AlphaGo program over Lee Sedol at Go, a game notorious for its enormous space of possible plays.

Today, researchers are hard at work expanding on these achievements. As machine learning and artificial intelligence applications become more popular, they’re also becoming more accessible, moving from server-based systems to the cloud. At Google Next 2018, Google touted several new deep learning and machine learning capabilities, such as Cloud AutoML and BigQuery ML. Over the past few years, Amazon, Microsoft, Baidu, and IBM have all unveiled machine learning platforms through open source projects and enterprise cloud services. Machine learning algorithms are here to stay, and they’re rapidly expanding what research and industry can accomplish.

What Is the Future of Machine Learning?

Machine learning algorithms are being used around the world in nearly every major sector, including business, government, finance, agriculture, transportation, cybersecurity, and marketing. Such rapid adoption across disparate industries is evidence of the value that machine learning (and, by extension, data science) creates. Armed with insights from vast datasets, often delivered in real time, organizations can operate more efficiently and gain a competitive edge.

The applications of machine learning and artificial intelligence extend beyond commerce and optimizing operations. Following its Jeopardy win, IBM applied the Watson algorithm to medical research literature, thereby “sending Watson to medical school.” More recently, precision medicine initiatives are breaking new ground using machine learning algorithms driven by massive artificial neural networks (i.e., “deep learning” algorithms) to detect subtle patterns in genetic structure and in how a patient might respond to different medical treatments. Breakthroughs in how machine learning algorithms can represent natural language have enabled a surge of new possibilities, including automated text translation, text summarization, and sophisticated question-answering systems. Other advances involve learning systems for automated robotics, self-flying drones, and the promise of industrialized self-driving cars.

The continued digitization of nearly every sector of society and industry means that an ever-growing volume of data will continue to be generated. The ability to gain insights from these vast datasets is one key to addressing an enormous array of issues, from identifying and treating diseases more effectively, to fighting cyber criminals, to helping organizations operate more effectively and boost the bottom line.

The universal capabilities that machine learning enables across so many sectors make it an essential tool, and experts predict a bright future for its use. In fact, machine learning and artificial intelligence topped the list in Gartner’s Top 10 Strategic Technology Trends for 2017:

“AI and machine learning … can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously… The combination of extensive parallel processing power, advanced algorithms and massive datasets to feed the algorithms has unleashed this new era.”

Unsupervised learning techniques will eventually lead to generalized artificial intelligence applications that can teach themselves to do myriad tasks, according to Forbes, instead of the single task that supervised learning algorithms are instructed to do. The future is bright, and full of machine learning techniques.

Machine Learning and datascience@berkeley

In recognition of machine learning’s critical role today and in the future, datascience@berkeley includes an in-depth focus on machine learning in its online Master of Information and Data Science (MIDS) curriculum.

The foundation course is Applied Machine Learning, which provides a broad introduction to the key ideas in machine learning. The emphasis is on intuition and practical examples rather than theoretical results, though some experience with probability, statistics, and linear algebra is important. Students learn how to apply powerful machine learning techniques to new problems, run evaluations and interpret results, and think about scaling up from thousands of data points to billions.

The advanced course, Machine Learning at Scale, builds on and goes beyond the collect-and-analyze phase of big data by focusing on how machine learning algorithms can be rewritten and extended to scale to work on petabytes of data, both structured and unstructured, to generate sophisticated models used for real-time predictions.

In the Natural Language Processing with Deep Learning course, students learn hands-on skills using cutting-edge distributed computation and machine learning systems such as Spark. They are trained to code up their own implementations of large-scale projects, such as Google’s original PageRank algorithm, and discover how to use modern deep learning techniques to train text-understanding algorithms.

Learn more about the datascience@berkeley curriculum.

