

Updated · Jun 07, 2023
In 2022, computers can not only see, but also read and write of their own accord.
Well, let’s have a look at the modern horror story we actually live in.
For example, how would you react if someone told you that, according to some forecasts, around 30% of jobs could soon be replaced by automation? It’s outrageous, isn’t it?
And what does that have to do with machine learning algorithms?
Thankfully, there’s a light at the end of the tunnel. Let me walk you through it.
In 2022, computers can see, read, write, translate languages, recognize our speech, and hold conversations with us.
The list goes on, of course.
Every day we do web searches, visit websites and social media. And we never ask ourselves the fundamental question: how does any of this actually work?
Here we go!
The achievements of technology are raising questions about the future of humanity.
Maybe these facts will give us some insight:
In 2022 we can actually own a robot at home.
You can have a fluent conversation with QTrobot or Tapia. They are called social robots for a reason.
They remember the names, faces, and voices of your friends and family members (which is not creepy at all!), they can babysit your child (oh yes!), and if you have an accident at home they’ll call 911 for you. That last one can be especially useful when there’s no one else around. But we’ll get to that later.
Nowadays algorithms can “teach themselves” languages, and even translate spoken English to written Chinese in real time with the fluency of an average native Chinese speaker. Sooner or later, studying foreign languages may well become obsolete.
And how about this:
Our smartphones are literally spying on us… I am sure you know exactly what I’m talking about! Imagine - at the office lunch you mention (verbally!) you want to start watching Lucifer. Back at your desk, you open Pinterest or Facebook on your phone, and there he is - the Devil himself… (Yes, Tom Ellis is dreamy, but that’s not the point!)
Recommendation systems are all around us. If you run a search for “Lego”, the related images that pop up, classified as Lego, were recognized as such by an AI. In other words, they weren’t manually annotated as Lego blocks by a human…
The algorithm taught itself what a Lego block looks like by examining millions of images.
Goosebumps!
All of those capabilities, and SO much more, are already being utilized by companies.
The implications here are:
First, computers already possess the power to create AI algorithms and teach themselves how to do…well, almost anything.
Teach themselves, guys! Think about all the robot workers in the future. They will learn and perform tasks WAY faster than human workers.
And second - I know what you are thinking - OMG, humanity is so doomed!
A lot of people react this way.
Many authors over the last century have written about a future where robots dominate humans: artificial intelligence flourishes, robots rule the world and feed on us. The singularity is nigh.
OK, this is a good place to stop.
Now that we’ve got that out of our system, let’s actually look at what’s factually true.
We need some context first.
About 1,500 years ago, one of the world’s most beloved games was invented in India. Over the following centuries, it evolved into the game we now know as chess.
Chess has roughly 10 to the power of 40 legal positions (that’s a 1 with 40 zeros after it).
In 2017, DeepMind’s AlphaZero algorithm used machine learning - reinforcement learning through self-play, to be precise - to teach itself to play AND win the game.
The whole process, from introducing the game to the algorithm until it beat Stockfish - one of the strongest chess engines in the world - took:
(brace yourself!)
4 hours.
Ouch!
Yes, we are on the verge of a machine learning revolution.
Looking back, this is not the first disruption of this kind. The industrial revolutions of the 19th and early 20th centuries caused social disruption as well, but eventually, humanity and machines achieved an equilibrium.
Yes, things are changing, and that is actually a good thing!
Machine learning software possesses the power to look at a problem with fresh eyes and navigate through unknown environments.
So, as we are about to see, it’s not a horror story after all.
More like a technological miracle.
Now:
For starters, what is machine learning by definition?
Basically, instead of following explicit instructions, a machine is programmed to teach itself - to work out its own rules from data and create solutions. The goal is to produce numbers (and, if needed, predictions) that are as accurate as the data allows.
Think of a technology that can solve a wide range of completely different problems.
And that’s the beauty of it!
One of the system’s most common purposes is to classify. Applied to images, this is the core of computer vision. The system will learn on its own to make distinctions. And the number of different problems in the world that can be reduced to the seemingly simple task of classification is absolutely mind-boggling.
Just imagine the ability to classify between, say, spam and legitimate email, or a tumor and healthy tissue.
That’s exactly why experts in many areas will become obsolete. You don’t need to be a domain expert to write code that performs such tasks. The people who wrote the real-time English-to-Chinese translation program did not speak a word of Chinese.
The algorithm will teach itself how to be an expert.
And yes, it’s important to learn about them and get to know them… the way we got to know computers in the beginning.
We are good with computers now. So good that we tend to anthropomorphize them (or maybe that’s just me?).
It seems that this is the time to ask ourselves:
What will happen to all those people who will eventually lose their jobs to AI and machine learning programs?
Have you heard of a little thing called Universal Basic Income?
Here it goes:
In the future, citizens will receive an income that doesn’t involve them doing any work. The money will come from the insane efficiency that automation will provide and the savings that follow from it.
Either this or - a slightly more realistic scenario - many new types of jobs will emerge. At the end of the 19th century, about 50% of the US population was involved in agriculture. Now, thanks to powerful machines, less than 2% are farmers - and yet people still have jobs.
Now, what can machine learning be used for?
Machine learning can be used to infer new facts and patterns from existing data.
Let’s see some of the areas where machine learning will make a great difference: healthcare diagnostics, targeted marketing, e-commerce recommendations, machine translation, and web search, to name a few.
Machine learning is just a tool, and it will remain one for the foreseeable future.
So, no need to worry. Sit back and relax.
Now that we’ve seen what machine learning is and established how important and beneficial it is for our future, let’s have a closer look at the algorithms that make the magic happen.
A great way to explain machine learning algorithms is to compare them to traditional programming.
In traditional programming, the programmer works in a team with an expert in the field for which the software is being developed. The more complex the task, the longer the code and the more difficult it is to write.
Machine learning algorithms work quite differently. The algorithm receives a dataset as input - and, optionally, a matching one for the expected output. It then analyzes them and works out the process that has to take place to get from one to the other. Today, setting this up is still a job reserved for a human programmer. In the future, that will change as well.
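To make the contrast concrete, here’s a minimal sketch (assuming Python with scikit-learn installed; the overheating example and all its numbers are invented for illustration):

```python
# Traditional programming vs. machine learning, side by side.

from sklearn.linear_model import LogisticRegression

# Traditional programming: a human expert supplies the rule explicitly.
def is_overheating_rule(temp_celsius):
    return temp_celsius > 90  # threshold chosen by the expert

print(is_overheating_rule(95))  # True

# Machine learning: we supply examples (inputs plus known outputs)
# and let the algorithm work out the rule on its own.
temps = [[70], [75], [85], [92], [98], [105]]  # input dataset
labels = [0, 0, 0, 1, 1, 1]                    # known outcomes (0 = fine, 1 = overheating)

model = LogisticRegression().fit(temps, labels)
print(model.predict([[88], [95]]))             # the learned "rule" in action
```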
There are 4 different types of machine learning algorithms.
Here they are:
The input data in supervised learning algorithms is labeled, and the output is known and accurate. In order to use this class of algorithms, you’d need a large amount of labeled data. And that may not always be an easy task.
Supervised algorithms fall into two categories - regression and classification. Each tackles a different kind of problem.
Regression algorithms are the ones that make predictions and forecasts. Among others, these include weather forecasts, population growth and life expectancy estimates, and market forecasts.
Classification algorithms are used for diagnostics, identity fraud detection, customer retention, and as the name suggests - image classification.
Unsupervised learning occurs when the input data is not labeled. These algorithms organize the data into structures of clusters, so any input data is immediately ready for analysis.
Since the data is not labeled, there is no way of evaluating the accuracy of the outcome. That said, accuracy is not what unsupervised algorithms are designed to pursue. The clusters the algorithm creates are not known to it in advance. So the idea is to input data, analyze it, and group it into clusters.
Just like the supervised algorithms, their unsupervised cousins are divided into 2 categories - dimensionality reduction and clustering.
Clustering algorithms themselves are obviously a part of all this. It’s useful to group data into categories, so you don’t have to deal with every piece on its own. These algorithms are used above all for customer segmentation and targeted marketing.
Dimensionality reduction algorithms are used for structure discovery, big data visualization, feature elicitation, and meaningful compression. If clustering is one side of the coin, dimensionality reduction would be the other. By grouping data into clusters, the algorithms inevitably reduce the number of meaningful variables (dimensions) that describe the set of data.
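Here’s a minimal sketch of both flavors, assuming scikit-learn and synthetic data:

```python
# Unsupervised learning on 300 unlabeled points in 10 dimensions,
# secretly drawn from 3 groups.

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=300, n_features=10, centers=3, random_state=42)

# Clustering: group the data without ever seeing a label
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# Dimensionality reduction: compress 10 features down to 2 for visualization
X_2d = PCA(n_components=2).fit_transform(X)

print(clusters[:10])  # cluster assignments for the first 10 points
print(X_2d.shape)     # (300, 2)
```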
Now, there is a class of machine learning algorithms that combines the previous 2 classes:
Semi-supervised learning stands between supervised algorithms with their labeled data, and unsupervised algorithms with unlabeled data.
Semi-supervised algorithms use a small amount of labeled data and a large amount of unlabeled data. This can lead to an improvement in learning accuracy.
It’s also a huge relief in terms of data gathering since it takes a good deal of resources to generate labeled data.
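A minimal sketch of the idea, using scikit-learn’s label spreading (one of several semi-supervised techniques), where unlabeled points are marked with -1:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Pretend we could only afford to label 5% of the data
rng = np.random.RandomState(0)
y_partial = y.copy()
hidden = rng.rand(len(y)) < 0.95
y_partial[hidden] = -1  # -1 means "label unknown"

model = LabelSpreading().fit(X, y_partial)

# How well did the labels propagate to the points we hid?
accuracy = (model.transduction_[hidden] == y[hidden]).mean()
print(f"Recovered {accuracy:.0%} of the hidden labels")
```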
Unlike the 3 previous types, reinforcement algorithms don’t start from a fixed dataset. They choose an action, evaluate the outcome, and change their strategy if needed.
In reinforcement algorithms, you create an environment and a loop of actions, and that’s it. Without assembling a database, you have a winner. Why?
Well, it was reinforcement algorithms that figured out the games of checkers, chess and Go.
Reinforcement learning works on the principle of trial and error. The system is given a reward of some sort that helps it measure its success rate. In the case of games, the reward is the score. Whenever the system wins a point, it evaluates the move that earned it as successful, and the status of that move rises. It keeps repeating the loop, reinforcing successful moves, until it has mastered the game.
And that’s how we have an algorithm that can master the game of chess in 4 hours.
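To make this less abstract, here’s a bare-bones Q-learning sketch - a classic reinforcement learning algorithm (not AlphaZero itself, which is far more sophisticated). The corridor “game”, its rewards, and the hyperparameters are all invented for illustration:

```python
import random

n_states, actions = 5, [0, 1]          # action 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state < n_states - 1:
        # Explore sometimes; otherwise exploit the best known action
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[state][act])
        next_state = max(0, state + (1 if a == 1 else -1))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # The Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# The learned policy per state: should be all 1s, i.e. "always go right"
print([max(actions, key=lambda act: Q[s][act]) for s in range(n_states - 1)])
```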
Now we know!
Alright. Let’s take a look at the algorithms themselves:
Now, before we start, let’s take a look at one of the core concepts in machine learning. Regression, in this context, means the algorithm tries to establish a relationship between variables.
There are many types of regression - linear, logistic, polynomial, ordinary least squares regression, and so on. Today we’ll just cover the first 2 types because otherwise this will be better published as a book, rather than an article.
As we’ll see in a moment, most of the top 10 algorithms are supervised learning algorithms, and all of them are easy to work with in Python.
Here comes the top 10 machine learning algorithms list:
Linear regression is among the most popular machine learning algorithms. It works by establishing a relation between two variables, fitting a linear equation to the observed data.
In other words, this type of algorithm observes various features in order to come to a conclusion. If there is more than one explanatory variable, the method is called multiple linear regression.
Linear regression is also one of the supervised machine learning algorithms that work well in Python. It is a powerful statistical tool that can be applied to predict consumer behavior, estimate forecasts, and evaluate trends. A company can benefit from conducting linear analysis to forecast sales for a future period of time.
So, if we have two variables, one of them is explanatory, and the other is dependent. The dependent variable represents the value you want to research or make a prediction about. The explanatory variable is independent: the dependent variable always depends on it, never the other way around.
The point of linear regression in machine learning is to see whether there is a significant relationship between the two variables and, if there is, what exactly that relationship looks like.
Linear regression is considered a simple machine learning algorithm and is therefore popular among data scientists.
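A minimal sketch, assuming scikit-learn; the “ad spend vs. sales” numbers are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

ad_spend = np.array([[10], [20], [30], [40], [50]])  # explanatory variable
sales = np.array([25, 41, 58, 77, 95])               # dependent variable

model = LinearRegression().fit(ad_spend, sales)
print(f"sales ~ {model.coef_[0]:.2f} * spend + {model.intercept_:.2f}")
print(model.predict([[60]]))  # forecast for a spend level we haven't seen
```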
Now, there is linear regression, and there is logistic regression. Let’s have a look at the difference:
Logistic regression is one of the basic machine learning algorithms. It is a binomial classifier that has only 2 states, or 2 values - to which you can assign the meanings of yes and no, true and false, on and off, or 1 and 0. This kind of algorithm classifies the input data as category or non-category.
Unlike linear regression, logistic algorithms make predictions by using a nonlinear function: a linear combination of the features is squashed through the logistic (sigmoid) function into a value between 0 and 1. Logistic regression algorithms are used for classification, not for regression tasks. The “regression” in the name refers to the linear model at the algorithm’s core, fitted in the feature space.
Logistic regression is a supervised machine learning algorithm which, like linear regression, works well in Python. If the output of a study is expected in terms of sick/healthy or cancer/no cancer, then logistic regression is the perfect algorithm to use.
Unlike linear regression, where the output may take any value, logistic regression ultimately outputs only 1 or 0 - via a probability in between.
There are 3 types of logistic regression, based on the categorical response: binary, multinomial, and ordinal.
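A minimal binary-classification sketch in the sick/healthy spirit mentioned above, assuming scikit-learn; the features and labels are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = LogisticRegression().fit(X_train, y_train)

print(model.predict(X_test[:5]))        # hard 0/1 decisions
print(model.predict_proba(X_test[:5]))  # the underlying sigmoid probabilities
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```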
Let’s see another great classifying algorithm:
Linear Discriminant Analysis (LDA) finds linear combinations of features that separate different classes of input data. The purpose of an LDA algorithm is to express a dependent variable as a linear combination of features. It is a great classification technique.
This algorithm examines the statistical properties of the input data and makes calculations for each class: it estimates the mean value of each class and the variance across all classes.
During the process of modeling the differences among classes, the algorithm examines the input data according to independent variables.
The output is the class with the highest computed score. Linear Discriminant Analysis algorithms work best for separating known categories. When several factors need to be mathematically divided into categories, we use an LDA algorithm.
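A minimal sketch, assuming scikit-learn: LDA separating three known categories in the classic iris flower dataset, which ships with the library:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

print(f"accuracy on the known categories: {lda.score(X, y):.2f}")
print(lda.transform(X).shape)  # the 4 features projected onto 2 discriminant axes
```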
The kNN (k-Nearest Neighbors) algorithm is one of the great machine learning algorithms for beginners. It makes predictions based on old, available data in order to classify new data into categories based on different characteristics.
It is on the supervised machine learning algorithm list and is mostly used for classification. It stores the available data and uses it to measure similarity to new cases.
The K in kNN is a parameter that denotes the number of nearest neighbors to include in the “majority voting process”. This way, each new element’s neighbors “vote” to determine its class.
The best time to use the kNN algorithm is when you have a small, noise-free dataset in which all the data is labeled. The algorithm is not a quick one - it compares each new sample against all the stored data - and it doesn’t teach itself to recognize noisy, unclean data. When the dataset is large, kNN is usually not a good idea.
The kNN algorithm works like this: first, the parameter K is specified; then the algorithm makes a list of the K entries closest to the new data sample; next, it finds the most common classification among those entries; and finally, it assigns that classification to the new data input.
In terms of real-life applications, kNN algorithms are used by search engines to establish whether search results are relevant to the query. They are the unsung hero that saves users time when they do a search.
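Here’s a minimal sketch of that voting process with k = 3, assuming scikit-learn; the sizing data is invented for illustration:

```python
from sklearn.neighbors import KNeighborsClassifier

# A small, clean, fully labeled dataset: [height_cm, weight_kg] -> size label
X = [[158, 58], [160, 59], [163, 61], [168, 66], [170, 68], [173, 70]]
y = ["S", "S", "S", "M", "M", "M"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# The 3 nearest labeled neighbors of the new sample "vote" on its class
print(knn.predict([[165, 63]]))
```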
Next comes the Tree-Trio: Regression Trees, Random Forest, and AdaBoost.
Here we go:
Yes, they are called trees, but since we’re talking about machine learning algorithms, imagine them with the roots on top and branches and leaves at the bottom.
Regression trees are a type of supervised learning algorithm, that - surprise, works well in Python. (Most ML algorithms do, by the way.)
These “trees” are also called decision trees and are used for predictive modeling. They require relatively little data preparation from the user.
Their representation is a binary tree, and they can solve classification problems as well as predict numeric values - the latter is where the “regression” in the name comes from. This type of algorithm uses a tree-like model of decisions. It performs variable screening or feature selection, and the input data can be both numerical and categorical.
Translation, please!
Sure. Any time you make a decision, you transition to a new situation - with new decisions to be made. Each of the possible routes you can take is a “branch”, while the decisions themselves are the “nodes”. Your initial starting point is the root node.
That’s how a decision tree algorithm creates a series of nodes and leaves. The important thing here is that all of them grow from a single root node. (In contrast, random forest algorithms produce a number of trees, each with its own root node.)
In terms of real-life application, regression trees can be used to predict survival rates, insurance premiums, and the price of real estate, based on various factors.
Regression trees “grow” branches of decisions until a stopping criterion is reached. A single tree works better with small amounts of input data, because grown too deep on a complex dataset it tends to overfit and produce biased output.
The algorithm decides where to split and form a new branch based on splitting criteria. The data is divided into regions of sub-nodes, grouped around the available variables.
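A minimal regression-tree sketch, assuming scikit-learn; the house-price numbers are made up for illustration:

```python
from sklearn.tree import DecisionTreeRegressor, export_text

X = [[50, 30], [60, 25], [80, 20], [100, 10], [120, 5], [150, 2]]  # [sqm, age]
prices = [110, 135, 180, 260, 320, 410]                            # in thousands

tree = DecisionTreeRegressor(max_depth=2).fit(X, prices)

print(tree.predict([[90, 15]]))
print(export_text(tree, feature_names=["sqm", "age"]))  # the branches and nodes, in text form
```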
The random forest algorithm is another form of supervised machine learning. It produces multiple decision trees instead of only one, like regression trees do. Each tree is grown on a random sample of the data, and the order of the trees is of no significance to the output. The larger the number of trees, the more accurate the result tends to be.
This type of algorithm can be used for both classification and regression. One of the awesome features of the random forest algorithm is that it can work even when a large proportion of the data is missing. It also has the power to work with large datasets.
For regression, though, these algorithms are not always the best choice, because you don’t have much control over what the model does, and it cannot predict values outside the range it saw during training.
Random Forest algorithms can be very useful in e-commerce. If you need to establish whether your customers will like a particular pair of shoes, you only need to collect information on their previous purchases.
You include the type of shoes, whether they had a heel or not, the gender of the buyer, and the price range of the previous pairs they ordered. This will be your input data.
The algorithm will generate enough trees to provide you with an accurate estimate.
You are welcome!
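Here’s a minimal sketch of that shoe example, assuming scikit-learn; every value below is invented for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Features per past purchase: [shoe_type_code, has_heel, buyer_gender_code, price_range_code]
X = [[0, 1, 0, 2], [1, 0, 1, 1], [0, 1, 0, 2], [2, 0, 1, 0],
     [1, 0, 0, 1], [0, 1, 1, 2], [2, 0, 0, 0], [1, 1, 1, 1]]
y = [1, 0, 1, 0, 0, 1, 0, 1]  # 1 = the customer liked the pair

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print(forest.predict([[0, 1, 0, 2]]))        # verdict for a new pair
print(forest.predict_proba([[0, 1, 0, 2]]))  # how the trees' votes split
```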
And here comes the last tree-system algorithm:
AdaBoost is short for Adaptive Boosting. Its creators, Yoav Freund and Robert Schapire, won the 2003 Gödel Prize for the algorithm.
Like the previous two, this one also uses the system of trees. Only instead of multiple nodes and leaves, the trees in AdaBoost produce only 1 node and 2 leaves, a.k.a. a stump.
AdaBoost algorithms differ substantially from decision trees and random forests.
Let’s see:
A decision tree algorithm will use many variables before it produces an output. A stump can only use 1 variable to make a decision.
In the case of random forest algorithms, all the trees are equally important for the final decision. AdaBoost algorithms give some stumps more weight than others.
And last but not least, random forest trees are more chaotic, so to speak: the sequence of trees is irrelevant, and the outcome doesn’t depend on the order in which the trees were produced. In contrast, for AdaBoost algorithms, order is essential.
The outcome of every stump is the basis for the next, so if there is a mistake along the way, every subsequent stump becomes affected.
Alright, so what can this algorithm do in real life?
AdaBoost algorithms already shine in healthcare, where researchers use them to measure the risks of disease. You have the data, but different factors carry different weight. (Imagine you fell on your arm and your doctors use an algorithm to determine whether it is broken. If the input data contains both the x-ray of your arm and a photo of your broken fingernail… well, it’s quite obvious which stump will be given more importance.)
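A minimal sketch of the idea, assuming scikit-learn: boosting over depth-1 trees (stumps) on synthetic data. Note how each stump gets its own weight - some matter more than others:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=2)

stump = DecisionTreeClassifier(max_depth=1)  # 1 node, 2 leaves
model = AdaBoostClassifier(stump, n_estimators=50, random_state=2).fit(X, y)

# Each stump gets its own weight in the final vote
print(model.estimator_weights_[:5])
print(f"training accuracy: {model.score(X, y):.2f}")
```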
Now, we are out of the forest, so to speak, so let’s have a look at 3 other kinds of machine learning algorithms:
Naive Bayes comes in handy when you have a text classification problem. It is the machine learning algorithm to reach for when dealing with high-dimensional datasets, such as spam filtration or news article classification.
The algorithm carries its signature name because it naively regards each variable as independent. In other words, it treats the different features of the input data as completely unrelated. That simplification makes it a simple and effective probabilistic classifier.
The “Bayes” part of the name refers to the man who invented the theorem used for the algorithm, namely - Thomas Bayes. His theorem, as you might suspect, examines the conditional probability of events.
Probabilities are calculated on two levels. First, the prior probability of each class. And second, the conditional probability of each feature given that class. Combining the two through Bayes’ theorem yields the most likely class.
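A minimal text-classification sketch, assuming scikit-learn; the messages and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "cheap pills free shipping",
            "meeting moved to monday", "lunch tomorrow?",
            "free entry win cash", "see you at the meeting"]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

# Count word occurrences, then apply Bayes' theorem per class
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)

print(model.predict(["free cash prize", "monday meeting agenda"]))
```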
The Learning Vector Quantization algorithm, or LVQ, is one of the more advanced machine learning algorithms.
Unlike kNN, the LVQ algorithm is an artificial neural network algorithm. In other words, it is inspired by the way neurons in the human brain respond to stimuli.
The LVQ algorithm uses a collection of codebook vectors as its representation. These are essentially lists of numbers with the same input features as your training data, each tagged with an output class.
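scikit-learn has no built-in LVQ, so here’s a bare-bones NumPy sketch of the LVQ1 variant: one codebook vector per class, pulled toward samples it classifies correctly and pushed away from those it gets wrong. The data and learning rate are invented for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)

# Two synthetic classes in 2D
X = np.vstack([rng.randn(50, 2) + [0, 0], rng.randn(50, 2) + [4, 4]])
y = np.array([0] * 50 + [1] * 50)

codebook = np.array([[0.5, 0.5], [3.5, 3.5]])  # one prototype per class
codebook_labels = np.array([0, 1])
lr = 0.1                                        # learning rate

for epoch in range(20):
    for xi, yi in zip(X, y):
        nearest = np.argmin(np.linalg.norm(codebook - xi, axis=1))
        if codebook_labels[nearest] == yi:
            codebook[nearest] += lr * (xi - codebook[nearest])  # pull toward the sample
        else:
            codebook[nearest] -= lr * (xi - codebook[nearest])  # push away from it

# Classify a new point by its nearest codebook vector
new_point = np.array([3.8, 4.2])
print(codebook_labels[np.argmin(np.linalg.norm(codebook - new_point, axis=1))])
```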
Support Vector Machines (SVMs) are among the most popular machine learning algorithms.
The Support Vector Machines algorithm is suitable for extreme cases of classification - when the decision boundary of the input data is unclear. The SVM finds the frontier that best segregates the input classes, keeping the widest possible margin between them.
SVMs can be used on multidimensional datasets. Through the so-called kernel trick, the algorithm maps a non-linearly-separable space into one where a linear separator exists. In 2 dimensions you can visualize the boundary as a line, and thus have an easier time identifying the correlations.
SVMs have already been used in a variety of real-life fields: text categorization, image recognition, handwriting recognition, and bioinformatics, among others.
It sounds like the Swiss Army knife of ML algorithms, doesn’t it?
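A minimal sketch, assuming scikit-learn: an SVM with an RBF kernel handling data that no straight line could separate (two concentric circles):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=3)

# The kernel implicitly maps the points into a space where they ARE linearly separable
model = SVC(kernel="rbf").fit(X, y)

print(f"accuracy: {model.score(X, y):.2f}")
print(f"support vectors used: {len(model.support_vectors_)}")
```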
Humans and computers can work together successfully.
Researchers assure us that this partnership can, and will give amazing results. Machine learning algorithms are already helping humanity in a number of ways.
One of the most important functions of machine learning and AI algorithms is to classify.
Let’s see the top 10 machine learning algorithms once again in a nutshell:
1. Linear Regression - fits a line through the data to predict a numeric value.
2. Logistic Regression - classifies input into one of two categories via the sigmoid function.
3. Linear Discriminant Analysis - separates known categories using linear combinations of features.
4. k-Nearest Neighbors - classifies a new sample by a majority vote of its closest neighbors.
5. Regression (Decision) Trees - chain decisions into a binary tree of nodes and leaves.
6. Random Forest - many decision trees voting together.
7. AdaBoost - an ordered sequence of weighted stumps, each building on the last one’s mistakes.
8. Naive Bayes - a probabilistic classifier that treats every feature as independent.
9. Learning Vector Quantization - codebook vectors nudged toward (or away from) the training data.
10. Support Vector Machines - the frontier that best segregates the classes.
All these algorithms (plus the new ones that are yet to come) will lay the foundation for a new age of prosperity for humanity. They may even make possible (and necessary) a universal basic income to ensure the survival of the less employable among us. (Who will otherwise revolt and mess up our society. Oh, well.)
Well, who would have thought an article about machine learning algorithms would be such a ride? That was it for today.
See you soon, guys!
Darina Lynkova
Darina is a proud Ravenclaw and a fan of Stephen King. She enjoys being a part of an awesome team of tech writers who are having a ball writing techie articles. She also loves board games and a pint of lager. Currently, she is finishing her second master’s degree, at Vrije University, Brussels (Linguistics and Literature!) while headbanging on quality progressive metal…and banging her head with the intricacies of progressive technologies like AI and deep learning.