It doesn’t seem a day goes by without news and editorials about the replacement of middle-class jobs by “machines” (physical and digital, including AI). The reality is that manual, repetitive roles, and anything that can be reduced to a formulaic process, are at risk of being replaced by machines. As we saw in Part I, once through its initial learning phase, AI gets better and better at predicting outcomes and giving us probabilities with which to make better decisions. Those predictions become the foundation on which our role becomes that of decision-maker and prediction-and-choice architect.
After all, aren’t all strategic decisions about trade-offs, imperfect information, and a risk-to-reward ratio? AI limits the severity of risk by predicting likelihood, one half of the severity equation along with impact. AI cannot lessen the impact, but it can tell us, better than humans can, the likelihood.
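The relationship described above can be sketched as a toy calculation. This is purely illustrative; the function name, scale, and numbers are hypothetical, not from the text:

```python
# Toy sketch of the risk relationship described above:
# severity of risk = likelihood x impact.
# The AI supplies the likelihood; the impact is assessed by humans.

def risk_severity(likelihood: float, impact: float) -> float:
    """Combine a predicted likelihood (0..1) with a human-assessed impact score."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability between 0 and 1")
    return likelihood * impact

# Hypothetical example: the model predicts a 25% chance of an event
# whose impact humans have scored as 80 on some agreed scale.
print(risk_severity(0.25, 80))  # -> 20.0
```

The division of labor is the point: the machine can sharpen the likelihood term, but the impact term, and the scale it is measured on, remain human choices.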
But AI still needs humans. Very much so.
AI, no longer limited to formulaic automation, will move up the value chain to roles that require cross-referencing information for insights, pattern recognition, prediction of outcomes, and a host of other tasks presently performed by individuals with two or three degrees and a variety of experience. Those individuals are also skilled in softer, more engagement-related abilities, including empathy. And this is where AI falters.
“But, just because we can be more effective at something, as a machine might tell us, does not mean we should. Think about the slave trade. Machines and AI would have made the industry very efficient. But they wouldn’t stop to think, should we be doing this?”
Humans may lose their role as the producers of output, both physical and digital, but they will still define the variables, inputs, and outcome weightings (weighing one possible outcome as preferable to another), and make the overall decisions that the AI predictions support.
Outcome weighting will be a key role for humans in the future. AI will be critical in predicting the outcome of a potential car accident, given a set of inputs such as weather and speed. But AI will not be able to tell whether hitting a child is a better option than hitting a tree, especially since, measured purely by damage to the car, the tree would score worse than the child. Human empathy is required to ensure the right decision is made (in this case: never hit the child), not merely the most effective one.
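The idea above can be sketched as a tiny decision rule: a model's effectiveness scores, overridden by a human-defined veto. All names and numbers here are hypothetical, invented only to illustrate the layering:

```python
# Hypothetical sketch of human "outcome weighting" layered over model scores.
# The model ranks options purely by predicted cost (here, vehicle damage);
# humans attach vetoes that encode values, not effectiveness.

MODEL_COST = {"hit_tree": 0.9, "hit_child": 0.2}  # model: tree damages the car more
HUMAN_VETO = {"hit_child"}                        # humans: never an acceptable outcome

def choose(options):
    """Pick the lowest-cost option that humans have not vetoed."""
    allowed = [o for o in options if o not in HUMAN_VETO]
    return min(allowed, key=MODEL_COST.get)

print(choose(["hit_tree", "hit_child"]))  # -> hit_tree
```

Left to its cost table alone, the model would pick the child; the human veto is what makes the system choose the tree. That veto is exactly the outcome weighting the text argues humans must own.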
Better decision-making is the output of better predictions, but those predictions are shaped by the outcomes we need in order to make those decisions. Defining those outcomes is a key role for humans.
Judgment and prediction go hand in hand, and that is the joint role of machines and humans. Humans may no longer conduct some medical examinations, but they will set the parameters for what counts as acceptable risk. Hiring may be better done through machine-based interviews that are free of bias, but those won’t tap into “cultural” fit. And judicial rulings may be less biased as well, but they won’t take empathy and circumstance into account the way a human would guide them to.
AI won’t replace us; it will augment us and help us evolve.
Stay tuned, and we’ll discuss some of the tools for doing so.