## Machine Learning

Machine learning (ML) is a category of algorithms that allow software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. The basic premise of machine learning is to build algorithms that receive input data and use statistical analysis to predict an output, updating that output as new data becomes available.

Basically, machine learning is what makes your smartphone “smart”. At big corporate conventions you often see Machine Learning and AI used interchangeably, even though the two topics are quite different, though connected in many ways. The goal of AI, or Artificial Intelligence, is to build a machine or robot that can mimic the capabilities of a human mind. That of course includes the ability to learn, but also understanding text and language (Natural Language Processing), comprehending images and videos (Computer Vision) and other things a human can do.

Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: “A computer program is said to learn from experience *E* with respect to some class of tasks *T* and performance measure *P* if its performance at tasks in *T*, as measured by *P*, improves with experience *E*.”

In other words, Machine Learning is a **part of AI** that deals only with the learning abilities of a machine. All the algorithms and techniques dedicated to teaching a machine to learn from past experience, such as Linear Regression, Logistic Regression, Support Vector Machines and Gradient Descent, come under the field of Machine Learning.

But when companies like Google or Apple want to sell something, they use the term Artificial Intelligence or AI, and when they talk about their projects with engineers and other experts, they use the terms Machine Learning and Neural Networks.

Machine Learning can be split into three categories: Supervised Learning, Unsupervised Learning and Reinforcement Learning.

**Supervised Learning** is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from *labelled training data* consisting of a set of *training examples*. In supervised learning, each example is a *pair* consisting of an input object (typically a vector) and the desired output value (also called the *supervisory signal*). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a “reasonable” way.
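To make the input-output pairs concrete, here is a tiny sketch of supervised learning: a 1-nearest-neighbour classifier written in plain Python. The training data and labels are made up for illustration; real supervised learners infer more general functions than this.

```python
# A minimal supervised-learning sketch: 1-nearest-neighbour classification.
# Each training example is a (input vector, label) pair, exactly the
# "pair consisting of an input object and the desired output value" above.

def predict(train, x_new):
    """Return the label of the training example closest to x_new."""
    def distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(train, key=lambda pair: distance(pair[0], x_new))
    return nearest[1]

# Labelled training examples: (input, supervisory signal).
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((6.0, 6.5), "large"), ((5.5, 7.0), "large")]

print(predict(train, (1.1, 0.9)))  # → small
print(predict(train, (6.2, 6.8)))  # → large
```

Note how the unseen inputs are classified purely by generalizing from the labelled examples, which is the essence of the definition above.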

**Unsupervised learning** is a branch of machine learning that learns from data that has not been labelled, classified or categorized. Instead of responding to feedback, unsupervised learning identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data.
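As a sketch of what “identifying commonalities without labels” looks like, here is a minimal k-means clustering loop on made-up one-dimensional data. The data points and iteration count are purely illustrative.

```python
# A minimal unsupervised-learning sketch: k-means clustering (k = 2) on
# unlabelled 1-D data. No labels appear anywhere; the algorithm discovers
# the two groups on its own.

def kmeans_1d(points, iters=10):
    # Start with two arbitrary centres.
    c1, c2 = points[0], points[-1]
    for _ in range(iters):
        # Assignment step: group each point with its nearest centre.
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Update step: move each centre to the mean of its group.
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]  # no labels anywhere
print(kmeans_1d(data))  # → [1.0, 9.0]
```

The two returned centres are the “commonalities” the algorithm found: one cluster of small values and one of large ones.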

**Reinforcement learning** (**RL**) is an area of machine learning concerned with how software agents ought to take *actions* in an *environment* so as to maximize some notion of cumulative *reward*. Due to its generality, the problem is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In the operations research and control literature, reinforcement learning is called *approximate dynamic programming* or *neuro-dynamic programming*. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.

In machine learning, the environment is typically formulated as a Markov Decision Process (MDP), as many reinforcement learning algorithms for this context utilize dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP, and they target large MDPs where exact methods become infeasible.
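To make the action/environment/reward loop concrete, here is a sketch of tabular Q-learning, one of the simplest RL algorithms, on a toy corridor environment. The environment, rewards and all parameter values are made up for illustration; note that the agent never sees a model of the MDP, only sampled transitions, exactly as described above.

```python
import random

# A minimal reinforcement-learning sketch: tabular Q-learning on a tiny
# corridor of 5 states. The agent starts in state 0 and receives reward
# +1 only on reaching state 4.

N_STATES, ACTIONS = 5, [+1, -1]          # move right / move left
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: no model of the MDP is needed, only samples.
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned values should prefer moving right in every non-terminal state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))  # → True
```

After enough episodes the table of Q-values approximates the optimal action values, which is what the “approximate dynamic programming” name refers to.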

We’ll be looking into algorithms for each of these categories in separate posts (for SEO reasons).

If you were paying attention, you’ll have noticed that I mentioned Neural Networks above.

## What is a Neural Network?

**Artificial neural networks** (**ANN**) or **Neural Networks** (**NN**) are computing systems vaguely inspired by the biological neural networks that constitute animal brains. The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs. Such systems “learn” to perform tasks by considering examples, generally without being programmed with any task-specific rules. In image recognition, for example, they might learn to identify images that contain cats by analyzing example images that have been manually labelled as “cat” or “no cat” and using the results to identify cats in other images. They do this without any prior knowledge about cats, for example, that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the learning material that they process.

Naturally, a question arises for every Machine Learning beginner: if we have ML algorithms that can do almost all of this work with great accuracy, what is the need for Neural Networks?

One of my favorite YouTubers, 3Blue1Brown, has made a video about this.

You can watch the video if you have 20 minutes to spare, but I’ll try to summarize it as accurately and as simply as possible.

Basically, a Neural Network, unlike a traditional Machine Learning algorithm, more closely resembles the actual brain and the cells within it. And like the brain, a single neural network is capable of learning many different tasks, whereas with traditional Machine Learning algorithms we have to develop a different algorithm for each task.

A neuron in a neural network is known as a Perceptron. A Perceptron, just like any function, takes some input and gives some output. In machine learning, a perceptron is used in supervised learning, for both regression and classification.
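As a rough sketch of the “input in, output out” idea, here is a tiny perceptron in Python. The weights and bias below are picked by hand purely for illustration; learning them automatically is what training algorithms such as gradient descent are for.

```python
# A minimal perceptron sketch: a weighted sum of the inputs plus a bias,
# passed through a step activation.

def perceptron(inputs, weights, bias):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum + bias > 0 else 0  # step activation

# Hand-picked weights and bias: with these values the output fires (1)
# only when both inputs are 1, and stays 0 otherwise.
w, b = [1.0, 1.0], -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, w, b))
```

A single perceptron is very limited on its own; the power comes from wiring many of them together into layers.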

In statistical modelling, **regression** analysis is a set of statistical processes for estimating the relationships among variables. It includes many techniques for modelling and analyzing several variables when the focus is on the relationship between a dependent variable and one or more independent variables.
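For the simplest case, one independent variable and one dependent variable, the relationship can be estimated in closed form. Here is a least-squares line fit on made-up data; the numbers are purely illustrative.

```python
# A minimal regression sketch: fitting a straight line y = m*x + c to
# made-up data with the closed-form least-squares solution.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - m * mean_x
    return m, c

xs = [1, 2, 3, 4]
ys = [2.1, 4.0, 6.2, 7.9]   # roughly y = 2x, with a little noise
m, c = fit_line(xs, ys)
print(round(m, 2))  # → 1.96
```

The fitted slope recovers the underlying relationship between the two variables, which is exactly what regression analysis is after.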

**Classification** is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.

We will learn about Perceptrons in the next post, where we will also discuss how to build basic logic gates (AND, OR) with them, along with an optimization algorithm called Gradient Descent.

If you like my blog, then make sure to follow it, leave a like, and share this post with your friends and colleagues. If you have any queries, please leave a comment; I’ll reply to them as soon as possible.

Thank You.