“Machine Intelligence is the last invention that humanity will ever need to make.” – Nick Bostrom

How dependent are you on today's technology? Consider the following:

  1. Does your smartphone unlock when it sees your face?
  2. Do you use speech-to-text on your phone, dictating messages instead of typing them?
  3. Are you active on social media? Then you have probably seen "People You May Know" suggestions. Or you shop online and get a list of "recommended products" that generally (though not always) match your taste.
  4. Do you say "OK Google" to your phone and ask it to make calls?
  5. Do you get fast, personalized service from your bank, utility providers, and e-commerce sites?

If any of your answers is a "yes", you are surrounded by Machine Learning in one form or another. And I'm sure most of your answers are a yes.

Machine learning surrounds us so seamlessly that most of the time we don't even realize its presence.

This is also why new job openings appear on job portals daily. Currently, LinkedIn lists more than 14,000 jobs for Machine Learning Engineers.

You might be thinking how interesting it would be to learn Machine Learning, the ever-evolving technology that accompanies us without our ever noticing it. The numbers back this up: the past four years have seen an almost 75% rise in Machine Learning and AI jobs, and the global Machine Learning market is expected to be worth USD 31 billion by 2024, an annual growth rate of 42.8% over a six-year period.

Did You Know?

  • Almost 75% of Netflix users choose movies recommended to them by the company's machine learning algorithms.
  • The most popular ML use cases include reducing company costs (38%), generating customer insights and intelligence (37%), and enhancing customer experience.

Let us now look at what Machine Learning actually means, its history, and the way it has evolved.

What is Machine Learning?

Machine Learning is a subset of Artificial Intelligence in which a computer program learns from large amounts of data and optimizes its performance as it does so. It is the form of AI that enables computers to learn and improve as they are exposed to new, data-based scenarios.

Machine Learning is regarded as one of the most notable trends in technology. Machine Learning algorithms are based on mathematical models that enable a computer to recognize patterns hidden in data, learn directly from them, and perform complex tasks sensibly, rather than following pre-programmed rules.
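To make the contrast with pre-programmed rules concrete, here is a minimal sketch using scikit-learn. The tiny spam-detection dataset is invented purely for illustration: instead of hand-writing a rule like "many links means spam", we let the model infer the rule from labeled examples.

    # A minimal sketch of learning from data instead of hand-written rules.
    # The tiny spam-detection dataset below is invented purely for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Each row describes an email: [number of links, number of ALL-CAPS words].
    X = [[0, 0], [1, 0], [8, 5], [6, 7], [0, 1], [7, 6]]
    y = [0, 0, 1, 1, 0, 1]           # 1 = spam, 0 = not spam

    model = DecisionTreeClassifier()
    model.fit(X, y)                  # the model infers the decision rule itself
    print(model.predict([[5, 4]]))   # -> [1], i.e. likely spam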

Machine Learning is widely used today in fraud detection, recommendation engines, web search, spam filters, credit scoring, ad placements, drug prescription and design, and many other applications. 

Let us now look at the history of machine learning and some of its most important milestones.

History of Machine Learning

  • 18th Century marks the development of statistical models

Some of machine learning's vital concepts came from 18th-century probability and statistics. The well-known Bayes' Theorem, which is still a basic building block of many modern algorithms, was derived by the English mathematician Thomas Bayes and published posthumously in 1763. The theorem itself is stated below.
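Written in LaTeX notation, the theorem relates the probability of a hypothesis H after seeing evidence E to the reverse conditional probability:

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

Here P(H) is the prior belief in the hypothesis, P(E | H) is how likely the evidence is if the hypothesis holds, and P(H | E) is the updated (posterior) belief. This update rule underlies, among others, the naive Bayes family of classifiers.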

  • 1950 marks The Turing Test

In 1950, another English mathematician, Alan Turing, published "Computing Machinery and Intelligence", a paper that opens with the question "Can machines think?" and is filled with ideas that anticipate machine learning. In it he proposed the Turing Test: a machine passes if a human judge, conversing with it, cannot reliably tell its responses apart from a human's.

  • 1956 marks The Dartmouth Workshop

The term Artificial Intelligence was coined in 1956 during the Dartmouth workshop, which lasted six to eight weeks and was attended by scientists and mathematicians, including John McCarthy, Nathaniel Rochester, Marvin Minsky, and Claude Shannon. This workshop is widely considered to be the foundational event of Artificial Intelligence as a field.

  • 1957 marks The Perceptron

The Perceptron, developed by the famous American psychologist Frank Rosenblatt, was an attempt to build a neural network in hardware: its adjustable connection weights were implemented with rotary resistors (potentiometers) driven by electric motors. The machine was capable of taking inputs and learning to produce the desired output.
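The learning rule behind the Perceptron is simple enough to sketch in a few lines of Python. This is a software-only illustration, with weights as plain numbers instead of motor-driven resistors, and the AND-gate training data invented for the example:

    # A minimal software sketch of the perceptron learning rule.
    # Training data (the logical AND function) is chosen purely for illustration.
    def train_perceptron(samples, epochs=10, lr=0.1):
        w = [0.0, 0.0]   # connection weights (the motor-driven resistors, in software)
        b = 0.0          # bias / threshold term
        for _ in range(epochs):
            for x, target in samples:
                # Fire (output 1) if the weighted sum of inputs crosses the threshold.
                pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                # When the output is wrong, nudge the weights toward the target.
                err = target - pred
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                b += lr * err
        return w, b

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
    w, b = train_perceptron(data)
    print(w, b)  # learned weights and bias that reproduce AND, e.g. ~[0.2, 0.1], -0.2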

  • 1967 marks Nearest Neighbor Algorithm

The nearest neighbor rule appeared in several research papers, most notably a 1967 paper by T. Cover and P. Hart that became a landmark in pattern recognition: a new point is simply assigned the label of the closest point in the stored training data. The same nearest-neighbor idea was also used as an early routing heuristic for the traveling salesman problem: start from some city and repeatedly travel to the nearest unvisited one until every city has been seen, yielding a reasonably short tour. A minimal sketch of the classification rule follows.
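Here is that rule in a few lines of Python; the 2-D training points and labels are invented for illustration:

    # A minimal 1-nearest-neighbor classifier (the rule studied by Cover & Hart).
    # The 2-D training points and labels are invented for illustration.
    import math

    def nearest_neighbor(train, query):
        """Return the label of the training point closest to `query`."""
        best_label, best_dist = None, math.inf
        for point, label in train:
            dist = math.dist(point, query)   # Euclidean distance (Python 3.8+)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    train = [((1, 1), "A"), ((2, 1), "A"), ((8, 9), "B"), ((9, 8), "B")]
    print(nearest_neighbor(train, (2, 2)))   # -> "A" (closest point is (2, 1))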

  • 1973 marks The Lighthill report and the AI winter

This report presented a pessimistic forecast for AI research, arguing that the field had failed to live up to its promises. The period that followed is called the AI winter because the British government cut funding and interest in AI research declined.

  • 1979 marks the Stanford Cart

The Stanford Cart, a robot designed by students at Stanford University and radio-linked to a large mainframe computer, was capable of navigating obstacles in a room on its own.

  • 1981 marks Explanation-Based Learning (EBL)

This concept, introduced by Gerald DeJong, involves analyzing training data and creating a general rule to follow by discarding the unimportant details.

  • 1985 marks NetTalk

NetTalk, created by Terry Sejnowski (who holds the Francis Crick Chair at the Salk Institute), is a program that learns to pronounce written English text: the input is text, and the network's output is compared against the matching phonetic transcriptions so that it gradually improves.

  • 1986 marks Parallel Distributed Processing and neural network models

Parallel Distributed Processing, published by David Rumelhart and James McClelland, advanced the use of neural network models for machine learning.

  • 1992 marks a backgammon-playing program

TD-Gammon, a program created by researcher Gerald Tesauro and based on an artificial neural network, was able to play backgammon at the level of top human players.

  • 1997 marks Deep Blue

IBM's Deep Blue became the first chess-playing computer system to beat a reigning world chess champion, Garry Kasparov. To select the best possible move, Deep Blue relied on raw computing power to search through enormous numbers of positions.

  • 2006 marks Deep Learning

Geoffrey Hinton popularized the term "deep learning" to describe new algorithms that let computers learn to distinguish objects and text in images and videos.

  • 2010 marks Kinect

Microsoft released Kinect, a motion-sensing input device capable of tracking 20 points on the human body 30 times per second. It let people interact with a computer via motion and gestures.

  • 2011 marks Watson and Google Brain

IBM's Watson won the US quiz show Jeopardy! against two of its champion contestants. In the same year, Google Brain was developed: a deep neural network that could learn to discover and categorize objects much the way a cat does.

  • 2012 marks ImageNet classification and computer vision

Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever published an influential paper describing the model now known as AlexNet, which significantly reduced the error rate in image recognition systems. In the same year, Google's machine learning system made headlines by browsing YouTube videos autonomously and learning to identify the ones containing cats.

  • 2014 marks DeepFace

DeepFace, a software algorithm developed by Facebook, can identify and verify people in photos with near-human accuracy.

  • 2015 marks Amazon Machine Learning

Andy Jassy of AWS launched Amazon Machine Learning, a managed service that analyzes users' historical data to find patterns and deploy predictive models. In the same year, Microsoft released its Distributed Machine Learning Toolkit.

  • 2016 marks AlphaGo

Researchers at Google DeepMind created AlphaGo to play the ancient Chinese board game Go. Remarkably, it won four of five games against Lee Sedol, who had been among the world's top Go players for a decade.

  • 2017 marks Libratus and DeepStack

Libratus, created by researchers at Carnegie Mellon University, defeated four top professional players at no-limit Texas Hold 'em over 20 days of play in 2017. A similar success, DeepStack, was reported by researchers at the University of Alberta.

Bottom Line

With its ever-evolving techniques, Machine Learning offers several advantages to businesses: it helps them make quicker and smarter decisions, reduces bias in human decision making, and makes better use of structured and unstructured data.

To build a career in this innovative domain, the smart move is to take an online training course and get certified. Along with self-paced learning and a choice of learning modes, doubt-clearing sessions conducted by industry experts ensure complete preparation. All you need to do is enroll.