AI 101: An Attempt to Make It Make Sense

AI (Artificial Intelligence) has actually been around for quite a while. It has simply taken the factors it depends on, namely data and computing power, a long time to catch up and bring it into the mainstream.

The idea of AI was born in the 1930s, when John Atanasoff invented the electronic digital computer. It was another 20 years, however, before the term “artificial intelligence” was first used, when John McCarthy led the organization of the first AI conference in 1956.

Along its path to today there have been a few false starts, but these were less due to the “capabilities” of AI and more due to the inputs AI needs to be successful, namely computing power and large (unstructured) data sets. Today’s world has both, and they are growing exponentially. These two factors have catapulted AI into vogue.

Amazon, arguably the most visible commercial AI engine, is getting better and better at predicting what consumers want. Its success ratio sits around 5-7%, meaning roughly one in every fifteen to twenty products presented to customers is purchased. Imagine if that were 100%. Make no mistake, that is an Amazon ambition. At 100% predictability, the decision is no longer how to present a product; it becomes how and when to ship it, because we would know, with 100% probability, that the customer wants it.

Creepy? Maybe. Fascinating? Sure. Wrought with opportunities and risks? Absolutely.

Equally important is the admission that humans are not very good at explaining how we do certain things. How do you explain how you recognize a face? Pick up a ball? Decide whether you want chicken or salad? A massive number of tiny decisions all map up to each overall decision we need to make. Our job is to decide on the optimal solution; AI gives us the probability of each outcome being the optimal and likely one.

AI is perfectly suited to replicate this type of learning through trial and error, continually re-calculating how inputs lead to outputs. This gives us predictability, the foundation for any decision making: financial, health, political, military, social, emotional, and so on.

The Lingo…

GPUs (graphics processing units), driven by gaming enthusiasts, have delivered the processing power needed for wider adoption of AI, and one only needs to look at the likes of Facebook, Google and Amazon to see that data is growing at rates never seen before, with more and more of it unstructured and flawed. It’s the perfect storm AI has been waiting for in order to realize its potential.

As Machine Learning has developed, three “types” have emerged:

  • Supervised learning, whereby a human provides a clear set of inputs and desired outputs for the “machine” to learn from and reproduce. As the computer “learns”, the human corrects the inputs to achieve the desired output. This is a great type of ML when a known outcome is being sought, such as: what are the buying preferences of a certain customer segment? (A minimal sketch of supervised and unsupervised learning follows this list.)
  • Unsupervised learning, whereby inputs are provided but no desired output (although one likely exists). The difference here is that humans remove the “biases” of what they are seeking and rely on the inputs to determine a predictable outcome, even if it is not what they expected. This is a great type of ML when the outcome is unclear but the inputs are clear, such as: should we enter this market segment?
  • Reinforcement learning is akin to “real-time learning”. It is the vaguest type of learning, in that it is actually the closest to how real (human) learning occurs. Much like a child learning, the machine is given no fixed inputs or outputs. What humans do provide is “rewards”, or feedback, for everything the machine learns correctly, but what the machine “learns” is not structured. This is a great type of ML for generating new knowledge and insights without a set agenda, such as: what market segments should we enter?
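
To make the first two types concrete, here is a minimal sketch using scikit-learn; the “customer” features, labels and segment sizes are invented purely for illustration. Supervised learning is handed both the inputs and the desired output, while unsupervised learning gets the same inputs with no labels and has to find its own groupings (reinforcement learning would need an environment that hands out rewards, so it is omitted here).

```python
# Minimal sketch: supervised vs. unsupervised learning with scikit-learn.
# The "customer" data below is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Each row: [age, average_basket_value] for a hypothetical customer.
X = rng.normal(loc=[[35, 40]] * 50 + [[55, 120]] * 50, scale=5.0)

# Supervised: we also supply the desired output (did the customer buy? 1/0),
# and the model learns a mapping from inputs to that known outcome.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print("Supervised prediction for a new customer:", clf.predict([[50, 100]]))

# Unsupervised: same inputs, no labels. The algorithm groups the data itself,
# possibly revealing segments we did not anticipate.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Unsupervised segment assignments:", segments[:10])
```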

Each type still requires humans to guide and reinforce the desired behaviour (of the machine); the difference lies in how pedantic and directive the human is in doing so. The downside of more “structured” and “directed” learning is that it drives toward a desired, and likely biased, outcome, and dismisses as incorrect any data and insights that might “challenge” that outcome.

Although the concepts of AI/ML originated with Atanasoff, modern-day AI didn’t pick up speed until the 1950s, when Alan Turing’s seminal work and the “Turing Test” brought in commercial applications. Associated advances in neuroscience gave rise to the idea of the perceptron and, more broadly, Artificial Neural Networks (ANNs), which in the simplest terms attempt to mimic the workings of a human brain, in particular how it learns.

Perceptrons work much like a human’s brain does. As humans learn, we create heuristics and shortcuts to help us minimize our daily efforts to eat, sleep, communicate, and so on. A perceptron seeks to categorize data in the same way, so as to predict outcomes and help humans make decisions.

Heuristics are patterns and categorizations we create to minimize effort and maximize safety. Our brain then makes decisions based on them, without considering variances and other impacts. If it looks, smells and sounds like a sabre-toothed tiger, don’t wait around to confirm it is; just run.
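
As a rough illustration of the perceptron idea described above, here is a minimal sketch in Python with an invented, linearly separable data set: a single perceptron keeps one weight per input and nudges those weights every time its prediction is wrong, which is trial-and-error learning in its simplest form.

```python
# A single perceptron trained with the classic perceptron learning rule.
# The data and labels are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))        # two input features per example
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # a simple, linearly separable rule

w = np.zeros(2)   # one weight per input
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(20):                           # a few passes over the data
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)     # step activation: fire or don't
        error = target - pred                 # 0 if correct, +/-1 if wrong
        w += lr * error * xi                  # nudge the weights toward the target
        b += lr * error

accuracy = np.mean([int(np.dot(w, xi) + b > 0) == t for xi, t in zip(X, y)])
print(f"training accuracy: {accuracy:.2f}")
```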

MLPs, or Multi-Layer Perceptrons, arrived after roughly 30 years in which AI/ML had stagnated, and consist of at least three layers of perceptrons. These layers are made up of a number of interconnected ‘nodes’, each containing an ‘activation function’. Data and patterns are submitted to the AI engine via an ‘input layer’. This layer, through multiple connections, communicates with the ‘hidden layers’, where the processing of data is done (through algorithms and weightings developed and maintained by humans). The hidden layers then link to an ‘output layer’, where the output is collected by humans to make decisions.
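
A minimal sketch of that structure, with arbitrary layer sizes and random weights chosen only for illustration: data enters at the input layer, each hidden layer applies its weights and an activation function, and the output layer produces the values a human would read off to make a decision.

```python
# Forward pass through a small multi-layer perceptron (MLP).
# The layer sizes and random weights are arbitrary, chosen only to show the structure.
import numpy as np

def relu(z):
    """A common activation function: pass positives through, zero out negatives."""
    return np.maximum(0, z)

rng = np.random.default_rng(2)

# Input layer of 3 features -> two hidden layers of 8 nodes -> output layer of 2 values.
layer_sizes = [3, 8, 8, 2]
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Push one input vector through every layer and return the output layer's values."""
    activation = x
    for W, b in zip(weights, biases):
        activation = relu(activation @ W + b)   # each node applies its activation function
    return activation

print(forward(np.array([0.2, -0.7, 1.5])))      # the values collected at the output layer
```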

The biggest challenge with MLPs is that, as more hidden layers are added to create more intelligent and complex predictions, adjusting the weights and the structure of those weights becomes more difficult. This is exactly where humans come in and become critical to successful prediction with AI.

Through MLPs, which continually re-categorize data based on revised weights, algorithms and data sets, we can replicate human learning. Humans learn through a series of interactions with our environment; so do MLPs.

Backpropagation (of errors) comes from the work of Geoff Hinton in the mid-80s. It involves taking a required/desired output (hence it applies only to supervised learning) and looking at everything the network has incorrectly categorized. The needed “corrections” are fed back into the AI so it can correct the mistakes in its categorization rules. This is done over and over, and, like a human child who is corrected and guided, the system learns and becomes more intelligent.
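
A hedged sketch of that loop on a tiny one-hidden-layer network (the XOR-style data, layer sizes and learning rate are chosen only for illustration): each pass computes the network’s output, measures the error against the desired output, propagates that error backwards to adjust every weight and bias, and repeats until the predictions line up.

```python
# Backpropagation of errors on a tiny one-hidden-layer network (toy XOR data).
import numpy as np

rng = np.random.default_rng(3)

# Inputs and the desired (supervised) outputs: XOR of the two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 nodes: input -> hidden -> output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for step in range(10000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # How far off the desired output are we?
    error = y - output

    # Backward pass: push the error back through each layer.
    grad_out = error * output * (1 - output)              # sigmoid derivative at the output
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)  # error attributed to hidden nodes

    # Correct the weights and biases a little, then repeat.
    W2 += lr * hidden.T @ grad_out
    b2 += lr * grad_out.sum(axis=0)
    W1 += lr * X.T @ grad_hid
    b1 += lr * grad_hid.sum(axis=0)

print(np.round(output, 2))   # approaches [[0], [1], [1], [0]] as the corrections repeat
```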

A Support Vector Machine (SVM) is a classical approach to supervised learning, in that it simply separates data into two classes. Examples include text classification and medical classification.
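
As a small illustration using scikit-learn (the two-feature, two-class data set below is invented), an SVM is shown labelled examples from both classes, finds the boundary that best separates them, and can then classify new points on either side of it.

```python
# Support Vector Machine: separating a toy data set into two classes.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Two invented clusters of points, standing in for two classes such as
# "spam" (1) vs. "not spam" (0) documents reduced to two numeric features.
X = np.vstack([rng.normal([0, 0], 0.6, size=(50, 2)),
               rng.normal([3, 3], 0.6, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Fit the SVM: it finds the separating boundary with the widest margin.
model = SVC(kernel="linear").fit(X, y)

# Classify new, unseen points on either side of that boundary.
print(model.predict([[0.5, 0.2], [2.8, 3.1]]))   # expect [0 1]
```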

So What?

AI, in all its forms, will be as revolutionary as the combustion engine and electricity.

Just as the internet commoditized information, making it available to anyone with access to it, so too will AI commoditize predictability.

The internet provided us with information, but the information quickly became overwhelming and often erroneous. To date, only larger organizations with teams of data engineers (10,000 and growing at Google) could make sense of all this data, and even then imperfectly.

But AI loves masses of unstructured data. And as algorithms and open AI movements, such as Humans for AI and the Partnership on AI, become more viable and available to the masses, we will begin to see AI taking up a critical role in our everyday lives, allowing humans to focus on implementing what’s best rather than defining it.

As mentioned, though, we will always be part of the design of the decision-making architecture. Like the combustion engine and electricity, disruption and loss will occur, but so too will whole new levels of productivity and quality of life, if we tread carefully and deliberately.

Maybe AI can even help in that?

We have wonderful examples of narrow AI, but we have not yet cracked AGI (artificial general intelligence): AI that can be applied to a variety of uses, much like a human can spread their efforts across a variety of problems and cross-pollinate their successes.



At its simplest, AI means using machines to do things we think are intelligent.




Some key areas of AI today:

  • Physical AI, including robots and autonomous vehicles
  • Computer vision, such as imaging and video processing
  • Natural language processing, such as the interpretation of spoken and written communications
  • Conversational interfaces, such as call-center bots
