Welcome to a new installment of tutorials! In “AI Artifacts”, we’ll take a closer look at Artificial Intelligence (AI), Machine Learning (ML), and Neural Networks (NN). In turn, we’ll look at what each of these topics entails, and we’ll walk through some specific examples of each. Let’s start with some high-level definitions:
- Artificial Intelligence (AI): A broad field within the realm of computer science in which processes running on a computer system mimic human intelligence.
- Machine Learning (ML): A computer system or program capable of adapting and learning from a dataset to solve a problem without following an explicit set of instructions.
- Neural Network (NN): A specific type of problem solving technique in which a computer is modeled on the human brain and nervous system.
From these definitions, we first notice that AI is an extremely broad field encompassing all of ML, NNs, and deep learning (DL), with each field nested inside the last much like a Matryoshka doll:
Artificial Intelligence (AI)
Based on the extremely broad definition of Artificial Intelligence, we quickly realize that AI encompasses much more than ML, NNs, and deep learning. In fact, these sub-topics represent a very small subset of AI. For instance, let’s say we are handed a photograph and are asked to find the straight lines in it:
In this image, the straight, repetitive lines could be found through a number of techniques, including X- and Y-direction gradients, Fourier transforms, or template matching. Although these techniques accomplish the same feat as a human finding the same lines, none of them involves machine learning or neural networks. If we take this logic to its extreme, we could say that even the most mundane computer tasks qualify as AI — just as a human can add 2 and 2 to get 4, the “add” function in any program achieves the same result. Since human intelligence is being mimicked, one could argue that addition on a computer is also a form of artificial intelligence.
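To make this concrete, here is a minimal sketch of line finding with a simple X-direction gradient — pure signal processing, no learning involved. The tiny synthetic image and the threshold value are illustrative assumptions standing in for a real photograph:

```python
import numpy as np

# A toy 5x7 "image" with one bright vertical line at column 3.
# (An illustrative assumption; a real photo would be loaded from disk.)
image = np.zeros((5, 7))
image[:, 3] = 1.0

# Finite-difference gradient along X: large magnitudes mark edges.
grad_x = np.abs(np.diff(image, axis=1))

# Columns whose total gradient energy exceeds a threshold flag line edges.
edge_columns = np.where(grad_x.sum(axis=0) > 2.0)[0]
print(edge_columns)  # → [2 3], the edges on either side of the line
```

Every step here is a fixed, explicit instruction — which is exactly why this kind of AI is not machine learning.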
At the same time, the term “artificial intelligence” is a hot topic of research today, and it often carries a certain connotation. When AI is mentioned, it often conjures up thoughts of an algorithm being “intelligent” or using “learning” techniques. So much so that the phrase “artificial intelligence” is often used interchangeably with “machine learning”. This conflation is unfortunate, inaccurate, and often muddies the waters of understanding.
Machine Learning (ML)
Compared to artificial intelligence, the topic of machine learning is much more specific. Here, we are specifically talking about algorithms that have the capability to learn and adapt — either supervised or unsupervised — as new data is presented. But just as there are many different ways for people to learn or adapt their problem solving in life, there are many flavors of ML algorithms! Some of the more popular algorithms are listed below:
- Linear or logistic regression
- Decision trees / random forest
- Support Vector Machine (SVM)
- Naive Bayes
- K-Nearest Neighbors (KNN)
- Dimensionality reduction
- Gradient-boosting / AdaBoosting
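To get a feel for one entry on this list, here is a minimal sketch of K-Nearest Neighbors (KNN): classify a new point by majority vote among its k closest labeled points. The toy points and labels are illustrative assumptions, not a real dataset:

```python
import numpy as np

# Two tiny clusters of labeled points (illustrative assumptions).
points = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])

def knn_predict(query, k=3):
    """Classify `query` by majority vote among its k nearest points."""
    distances = np.linalg.norm(points - query, axis=1)
    nearest = labels[np.argsort(distances)[:k]]
    return np.bincount(nearest).argmax()

print(knn_predict(np.array([0.1, 0.0])))  # → 0 (near the first cluster)
```

Notice there is no hand-written rule for *what* makes a point class 0 or class 1 — the answer adapts to whatever data we supply, which is the essence of “learning from a dataset”.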
In a future post, we’ll discuss ML a bit more in depth, and each of these topics will receive its own description. For now, though, we just want to be aware that many different types of ML algorithms exist. Just as ML is not the only AI technique, neural networks are not the only technique used in ML!
Neural Networks (NN)
After examining AI and ML, we shouldn’t be surprised to find that the topic of neural networks is just as diverse. Much like ML, we can break neural networks down into subcategories based on the structure of the network we’ve constructed. There are so many, in fact, that listing them all here would be overwhelming! A future post will attempt to present a more exhaustive list of neural network models.
To get a taste of the variety, we could start with the basic perceptron model. This model is the simplest NN, with two input nodes and one output node. On the other hand, a Hopfield Network consists only of input nodes, but each input node is connected to all the other input nodes like a spider’s web! A third network type, the Deep Convolutional Inverse Graphics Network (DCIGN), consists of multiple hidden layers, input and output nodes, convolution nodes, and probability nodes! In total, there are well over 25 model variations of neural networks. As with our ML introduction, we’ll postpone our discussion of specific NN structures to a later post.
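The basic perceptron just described — two input nodes feeding one output node — is small enough to sketch in a few lines. Here it learns the logical AND function; the learning rate and number of passes are illustrative assumptions:

```python
import numpy as np

# Truth table for logical AND: two inputs, one target output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)  # one weight per input node
bias = 0.0
for _ in range(10):  # a few passes over the data suffice here
    for xi, target in zip(X, y):
        output = 1 if xi @ weights + bias > 0 else 0
        error = target - output
        weights += 0.1 * error * xi  # classic perceptron update rule
        bias += 0.1 * error

print([1 if xi @ weights + bias > 0 else 0 for xi in X])  # → [0, 0, 0, 1]
```

The network isn’t told the rule for AND; it nudges its weights after each mistake until its outputs match the targets — the same learn-from-error idea that deeper networks scale up.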
In this tutorial, we took a very, very brief look at the relationship between artificial intelligence, machine learning, and neural networks. We’ve defined each topic, compared their definitions, examined their differences, and looked at examples of algorithms within each field.
In our next post, we’ll begin picking apart our main topic of interest: machine learning. So come along and learn with me as we uncover each field piece by piece. To stay updated as I post, feel free to Like, comment, and subscribe! See you next time, and thank you for joining me — I’m excited to see where we go with this!
(Header image: Robot hand by rawpixel.com)