We will always be curious about the future. People wonder, which is generally a good thing, until they go and ruin it by making predictions.
I could provide you with a litany of quotes from people viewed as thought leaders who said things that are laughably wrong.
One of my favorites: “Two years from now, spam will be solved.” — Bill Gates, World Economic Forum, 2004.
That alone should make me resist pontificating about the future, at least in public. And yet, when I have an audience, I cannot resist.
No doubt, when the Southern Economic Development Council meets in Dallas in about two weeks and I am on a consultants’ panel discussing manufacturing, I will point my index finger skyward and say, “I believe …”
Which reminds me of an old Steve Martin standup routine (abbreviated):
“I believe in rainbows and puppy dogs and fairy tales.
“And I believe 8 of the 10 Commandments.
“And I believe in going to church every Sunday, unless there’s a game on.
“And I believe in equality, equality for everyone … no matter how stupid they are, or how much better I am than they are.
“And … I believe that robots are stealing my luggage.”
It’s that last point where I think he might be onto something.
AI Will Change Us
My “I believe” statement, which I may make at the upcoming SEDC conference, is that artificial intelligence (AI) will not only transform manufacturing, but will change all of our lives, for better or for worse, depending on how we control it. And we had best control it.
AI is the ability of machines to be “smart,” to learn, imitate, and dramatically accelerate or replace human decision making and behavior. Machine learning refers to teaching computers how to analyze data for solving particular tasks through algorithms.
Data is the lifeblood of AI. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behavior.
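To make the idea concrete, here is a minimal sketch of what “finding a pattern in data to predict behavior” means at its simplest: fitting a straight line to observed data points and using it on an input never seen before. The ad-spend and sales figures are invented for illustration and are not from this article.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is the covariance of x and y divided by the variance of x.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": monthly ad spend (in thousands) vs. units sold.
ad_spend = [1.0, 2.0, 3.0, 4.0, 5.0]
units = [12.1, 14.2, 15.8, 18.1, 20.0]

slope, intercept = fit_line(ad_spend, units)

# The fitted model can now predict behavior for an unseen input.
predicted = slope * 6.0 + intercept
print(f"predicted units at spend 6.0: {predicted:.1f}")
```

Real machine-learning systems use far more data, far more dimensions, and far more flexible models, but the loop is the same: learn a pattern from past data, then apply it to new cases.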
But the machines themselves cannot distinguish between good and evil, and there are bound to be some people with unsavory motives feeding them data and assigning them tasks.
It’s precisely for that reason that more than 100 leaders of AI companies, including Elon Musk, have signed an open letter to the United Nations, voicing concerns that companies building AI systems could convert the technology into, I am not making this up, autonomous killer robots.
This goes well beyond stealing your luggage.
Is There a God?
That there could be malicious use of AI, and probably would be, was the warning of the late Stephen Hawking. In a 2014 interview with comedian John Oliver, the world-renowned theoretical physicist displayed a wonderful sense of humor.
“There’s a story that scientists built an intelligent computer. The first question they asked it was: ‘Is there a God?’ The computer replies: ‘There is now.’ And a bolt of lightning struck the plug so it couldn’t be turned off.”
Still, there are “experts” (people who point their fingers in the air and say, “I believe ….”) who contend that the threat of AI is not real and that an AI Terminator is perhaps hundreds of years away, if at all.
We know that AI is not a matter of just installing software. It requires certain levels of expertise, vision, and information that few of us possess.
And certainly, very good things can come from AI, from self-driving vehicles, drones overhead and traffic management to preparing tax returns, identifying and treating rare cancers, and setting up meetings. The list goes on and on and will invariably grow.
But bad things, mischievous things, can also result. We know of videos generated by machines that have President Barack Obama saying things that he never said. We know that machines can learn from news, social feeds and just from listening to us around the house (Alexa, I am unplugging you), and thereby deliver targeted ads aimed directly at us, based on our likes and dislikes.
Determining what is true and what is not may only become more difficult as “fake news” proliferates beyond the realm of our traditional news media gatekeepers.
Some “experts” (finger pointers all) say we are currently in our fourth industrial revolution. The first, beginning in the 1760s, was characterized by mechanization, water power and steam power; the second, starting in the 1870s, was characterized by mass production, assembly lines and electricity.
The third industrial revolution got its start in the 1950s with computers and automation; and now we’re in the fourth, aka Industry 4.0, characterized by artificial intelligence and deep machine learning.
Every one of these industrial revolutions has brought the loss of jobs and the creation of new ones. In that regard, disruption is not new. Somehow, we have always been able to figure it out, to adapt.
But this latest industrial revolution may be different in that the technology we unleash may be somewhat mysterious even to its creators. And there is a chance, and I know this sounds outlandish, that we could lose control of the machines. More on that in a moment.
Will AI change your job? Yes, probably so. Will it be slow and gradual? Well, I’m not so sure. Most AI experts admit they never expected the field’s major achievements to come so quickly.
“The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that’s the single biggest existential crisis that we face, and the most pressing one,” warns Musk.
Which raises the question: should we not be imparting to the machines a certain level of human ethics? Algorithms may not be free of the biases of their programmers, but should we teach, guide, and provide socially acceptable boundaries for the AI systems that we use? In short, can we, should we, input some basic goodness into the machines so that they will not, well, turn on us?
Those might sound like ridiculous questions on their face, except for the fact that no one really knows how the most advanced algorithms work. Now here is where it gets spooky.
Will Knight, a senior editor for AI at MIT Technology Review, tells the story of a self-driving car developed by the chip maker Nvidia that didn’t follow a single instruction provided by an engineer or programmer.
Instead, the car relied entirely on an algorithm that it taught itself by watching a human drive. The researchers working on the project found that a bit, well, unsettling.
The CEO of DeepMind Technologies Limited, a British AI company owned by Google, reported in December that his company had developed an algorithm, called AlphaZero, that achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
AlphaZero made moves unthinkable to a human chess player, said Demis Hassabis, the founder and CEO of DeepMind and an expert chess player himself.
“It doesn’t play like a human, and it doesn’t play like a program,” Hassabis said at an AI conference in Long Beach, Calif. “It plays in a third, almost alien, way.”
Last year, Facebook shut down an experiment after two AI programs appeared to be chatting to each other in a strange language that only they understood. The two chatbots created their own changes to English that made it easier for them to work, but which remained mysterious to the humans who were there to oversee them.
This raises the specter, and poses a question: Could we actually lose control? Could something akin to Hawking’s lightning bolt happen, in which we could not pull the plug?
As AI becomes more commonplace, “I believe” (I am pointing my index finger skyward) that machines will learn to talk to each other, drive cars, beat us at games, dream, filter applicants for a job, paint pictures, tell stories and help make scientific discoveries. They may also do corporate site selection, an area of focus for me. These are all things the machines have already started to do.
And in the process, they may also confound us, their human creators, with mysterious “alien” behavior. We should watch for that very carefully. I know I do not want to lose my luggage to some larcenous robot.