As a learning experience designer, I’ve been gazing into the future for years, and I’m a little freaked out.
Before us lies the horizon of artificial intelligence. Supposedly we want AI because it is going to revolutionize our work and make the world seamless, integrated and progressive. We have heard that promise before, no? But are we creating a heaven on earth, or digging our own virtual graves?
Perhaps check in with Sophia, the “hot AI robot,” who was asked recently if she wanted to destroy humans. She gleefully replied that annihilation of the species would be most excellent.
And, please pass the computer chips: Google’s DeepMind AI AlphaGo bested Lee Se-dol of South Korea, one of the world’s finest Go players. AlphaGo took the first three games; Lee clawed back to win the fourth, and people nearly threw a worldwide parade in celebration. (AlphaGo still won the match, 4-1.)
Hooray for him. But you see which way this is going.
If machines no longer need us, how do we co-exist once that day arrives?
I’m known as a post-futurist in my field. Recently I spoke on “LX Design: Because Learning Design Requires UX” at South by Southwest Interactive in Austin. The same day, the Austin American-Statesman featured my opinion piece, “Saying Goodbye to the Future,” in which I suggested that the future is a place that needs to be created by the many, not just the few.
But then I may be alone in the wilderness here. I have been throwing myself into the future for years and seeing what bounces back. Sometimes I like what I see. Sometimes I am freaked out.
AI is supposed to be the answer, but what if we don’t understand the implications of the question? What if our presumed salvation changes us in ways we cannot now conceive of?
Let’s start with this: Throughout evolution, human thought has taken place inside the human brain and nowhere else. Take it down to one individual: a person feels the impact of a multitude of influences every moment of every day. Even under that bombardment, each of us retains a singular thought-processing capacity. We decide what we invite into our domain. Our cognitive abilities, and the thoughts they produce, are ours and ours alone.
Now, welcome to the speed of light, where all of that changes. Advances in medicine and technology, exponential increases in volume of information and speed of communication, access to resources, social mobility and media influencers have—often indelicately—forced us to process our worlds faster and faster.
But it’s all about to go even faster than that. We are closer every day to the technological singularity, the point where artificial intelligence becomes recursive and self-designing. Yes, it is a hypothetical event, but when it arrives, it will have real consequences for us. Someday your coffeemaker will tell you how illogical you are this morning.
Mark that on your calendar, HAL. It’s going to be a big day. Some say it will arrive as early as 2030, when machines surpass the human capacity to think and process information. We have known this was coming for a looooong time (Stanislaw Ulam was writing about it in 1958, recalling a conversation with John von Neumann), yet we dwell on the technology and profitability of it rather than the human impact. When your job is gone because AI has been taught to think significantly faster than you do, you will feel far more than discomfort. There is no going backward once that horizon is reached.
We tend to be greedy and egocentric. I’m sure the Romans, besotted with avarice, rarely stopped to think whether they should be so self-serving (and we all know how that turned out). They did stuff because it was fun and decadent and lucrative, and to hell with everybody else. Modern society may have better health codes, but we’re still too often feeding at the trough.
There is a pervasive sense of … unease that has lived among us since films about ferocious killer ants and giant radioactive women (who somehow kept their clothes on) first flickered across movie screens in the ’50s. Your grandparents’ alien, Klaatu of “The Day the Earth Stood Still,” and his ilk might have left them creeped out by the thought of an intelligence different from their own and beyond their capacity to comprehend. We can only speculate how threatened they would have felt by the idea of “foreigners” moving into the neighborhood.
Why should the idea of artificial intelligence be so unsettling? Why not welcome the idea of familiarity with another intelligence, one that will enhance and expand our global community? Why should no less than Stephen Hawking, Bill Gates and Elon Musk warn us of its immense existential threat? Is AI our generation’s answer to the nuclear bomb, as Musk suggests? Will we get 10 or so years down the road and say, as Robert Oppenheimer did, quoting the Bhagavad Gita: “Now I am become Death, the destroyer of worlds”? Why are the great minds of our time freaking us out like this?
As an LX designer and newly minted “educational post-futurist,” my job is to look past the horizon. I have to be ahead of the game. I spoke at SXSWedu about futurism, yet these are not abstract ideas to me; they are connected to the learners I work with every day. What I am talking about will alter the very makeup of the human brain. If you process information in relationship with another persona, even a technological one, thinking becomes a shared experience between your own thoughts and the construct and behavior of an artificial intelligence that shares your conceptual space. I believe we are no longer alone. What if the aliens we’ve always feared are already here?
Will AI prove to be good or bad? It’s like anything with immense power: Fire, once discovered, could comfort or destroy. In our rush to create technologies and profit from them (or serve the people who do), we have had precious little time to confront the ethical questions that reside in this vast desert of ambiguity. It is quiet in the desert, save the occasional explosion generated by the Next Greatest Technology Ever. For 70 years we have walked the tightrope of nuclear proliferation, knowing full well that once immense power is released into the wrong hands, it is too late; midnight has struck. George Orwell warned us as far back as 1949 of a dystopian future: a world locked in perpetual war, where the opponent no longer even matters and Big Brother watches our every move.
From my recent work with machine learning and predictive adaptive algorithms, I can tell you this: He wasn’t kidding. If I look at my wrist, I see my Fitbit. This ever-more ubiquitous device relays granular, highly personal data about me to … someone, every minute of every day, waking or sleeping. Any honest discussion of ethics, and of the permissions we grant to those who gather our data, sell us technology and place us in learning environments, will raise essential questions. More than anything, the questions “Who am I?” and “How do I interact with technology environments?” are of critical importance to the individual.
Until now, we have never had to do one potentially terrifying thing: share our personas with a secondary source. The shared persona raises the most critical question I know of: You may be influenced by data or persuasive UX design to participate in an informational or educational activity, only to have a profit motive suddenly impede your ability to act (want to watch a Bowie video? Sorry, sit through this Tide commercial first). People have been working to figure out human behavior since antiquity, and to interpret that behavior for profit. Even with all the sophisticated design strategies we employ to see how you use our websites, we still have to interpret your behavior and give it meaning.
We are now taking a step past that, as we will no longer look “at” technology, but “through” it. It will be a part of how you see and recognize the world around you, and ultimately how you interpret it. Soon you will not just be wearing a device, but will have it implanted—and the amount of data that can be captured about you, both with and without your awareness, is almost unfathomable.
A brilliant data analyst and mathematician friend of mine says that the person who understands and interprets data will be your best friend, or your worst enemy. You will have no way of knowing which, because interpretation always brings with it another perspective and another set of self-interests, distinct from the raw data itself. Scientists speak of the observer effect: the act of observing an experiment can alter it. Whatever the jury decides in the quantum world, interpreted data is always subject to human subjectivity. Exactly what my sleep patterns are telling someone in front of a computer screen in the Data Analysis Room of Fitbit’s Ministry of Information, I can’t say. That is a thorny issue in its own right, but it is not yet the shared persona I am introducing here.
What happens when you are in an unconscious relationship with another intelligence? As an evolutionary step, we go from using a technological tool to being in a persona-based relationship with an “other.” When your thought processes are shared with a set of algorithms that adapt to you and feed you information, and when you make decisions based on that shared relationship, acting on material ever more cunningly presented to you as authentic, you are no longer your own person. You are a shared persona.
It blows my mind that we are not thinking or talking about the ramifications of this. In my new book, “School of You,” I stress that learning is a tool of survival. I know this to be true, and set in the context of technology and strategy, it becomes ever more apparent how tightly learning and performance are bound to profit.
We need to understand the impact not of an alien race of spacemen arriving in saucers, but of the alien soon to be within us: another persona intelligence with which we will meld and unconsciously become one. And if we do not slow down long enough to consider how it will affect us as human beings, we may end up echoing Stephen Hawking: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”
If Mark Zuckerberg is putting more than $45 billion into adaptive design strategies, that money will go toward speeding development to boost the bottom line, not toward a rational discussion of whether all this makes us less human. History suggests that societies consumed by greed contract rather than expand.
Shared persona will soon be an old-fashioned idea, like “1984.” When man and machine become one, it will be an evolutionary step beyond conception. We are soon to be not only ourselves, but collections of the personas we interact with, personas that are part of our learning environment. It’s not just money. It’s not just machines. It’s evolution. We must think about the relationship between man and machine, or it will be decided for us.