When you are in business, you need to develop a perspective on AI. I like “The Fourth Age”, but there are a few darker ones as well, such as “Technology vs. Humanity”. Hence The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by Amy Webb, a book about AI.
When you talk about AI and the future of AI, the question of consciousness always comes up. Our human wiring is the result of millions of years of evolution. Can that be replicated by a machine? You are a biologically unique person whose salivary glands and taste buds aren’t arranged in the same order as mine. Yet we’ve both learned what an apple is, and the general characteristics of how an apple tastes, what its texture is, and how it smells. We will both experience an apple differently. We do not perceive the same reality. That is the first problem: which reality do machines live in?
But there are other questions:
- How could we verify our own consciousness?
- What proof would we need to conclude that our thoughts are our own and that the world around us is real?
- Can machines become self-aware? What is awareness?
- Do mind and machine simply follow an algorithm?
- The human brain has metabolic and chemical thresholds, which limit the processing power of the wet computers inside our heads. How far could processing power take AI?
It got scary once we invented deep learning. Siri, Google, and Amazon’s Alexa are all powered by deep learning. Just like a human player new to the game, AlphaGo learned everything from scratch, completely on its own, without an opening library of moves or even a definition of what the pieces did.
AlphaGo beat a professional Go player 5–0, and it won while analyzing several orders of magnitude fewer positions than IBM’s Deep Blue had. When AlphaGo beat a human, it didn’t know it was playing a game, what a game means, or why humans get pleasure out of playing games.
They moved on to AlphaGo Zero. It took only 70 hours of play for Zero to reach the same level of strength AlphaGo had when it beat the world’s greatest players. It not only rediscovered the sum total of Go knowledge accumulated by humans, but it also beat the most advanced version of AlphaGo 90% of the time—using completely new strategies. That means that Zero evolved into both a better student than the world’s greatest Go masters and a better teacher than its human trainers, and we don’t entirely understand what it did to make itself that smart.
5,000 Elo rating
A Go player’s strength is measured using something called an Elo rating, which determines a win/loss probability based on past performance. Grandmasters and world champions tend to have ratings near 3,500. Zero had a rating of more than 5,000. Comparatively, those brilliant world champions played like amateurs, and it would be statistically improbable that any human player could ever beat the AI system. The achievement was architecting a system that had the ability to think in an entirely new way and to make its own choices. Once Zero took off on its own, it developed creative strategies that no one had ever seen before, suggesting that maybe machines were already thinking in ways that are both recognizable and alien to us.
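The win/loss probability behind an Elo gap can be sketched with the standard logistic Elo formula. This is the generic model, not anything specific to DeepMind; the function name and the sample ratings are illustrative:

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score (win probability) of player A against player B
    under the standard Elo model: a logistic curve on a 400-point scale."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 3,500-rated world champion against a 5,000-rated Zero:
# a 1,500-point gap puts the human's chances below 0.02% per game.
champion_vs_zero = elo_expected_score(3500, 5000)
print(f"{champion_vs_zero:.6f}")
```

This is what “statistically improbable” means here: the model doesn’t make a human win impossible, it just assigns it a vanishingly small probability.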
We don’t know
Deep-learning processes happen in parallel and are not observable by AI researchers in real time. Someone would have to build the system and then trust that the decisions it was making were the right ones. We don’t know what it is “thinking”. What Zero also proved is that algorithms were now capable of learning without guidance from humans, and it was us humans who’d been holding AI systems back. It meant that in the near future, machines could be let loose on problems that we, on our own, could not predict or solve.
Meanwhile, AI researchers in a different division of Alphabet called Google Brain revealed that they had built an AI that’s capable of generating its own AIs. Self-replicating AI. We are crossing a threshold into a new reality in which AI is generating its own programs, creating its own algorithms, and making choices without humans in the loop. The Big Nine are nudging and manipulating your behaviour on a grand scale. Who knows if an independent AI is not already doing the same?
Thinking machines can make decisions and choices that affect real-world outcomes, and to do this, they need a purpose and a goal. Eventually, they develop a sense of judgment. These are the qualities that, according to both philosophers and theologians, make up the soul. And yes, thinking machines are capable of original thought.
G-MAFIA and BAT
More scary news. The development of AI is in the hands of Google, Microsoft, Amazon, Facebook, IBM, and Apple (the G-MAFIA) and Baidu, Alibaba, and Tencent (BAT). American multinationals and the Chinese government. Out-of-control capitalism combined with a totalitarian government.
That is concerning all by itself, but it also means that AI is built on monocultures. Conway’s law says that in the absence of stated rules and instructions, the choices teams make tend to reflect the implicit values of their tribe. If you, or someone whose language, gender, race, religion, politics, and culture mirror your own, are not in the room where it happens, you can bet that whatever gets built won’t reflect who you are. How are humanity’s billions of nuanced differences in culture, politics, religion, sexuality, and morality being optimized? In the absence of codified humanistic values, what happens when AI is optimized for someone who isn’t anything like you?
Problems on the road
Right now, the Big Nine are building the legacy code for all generations of humans to come, and we do not have the benefit of hindsight yet to determine how their work has benefitted or compromised society. We mistakenly treat artificial intelligence like a digital platform—similar to the internet—with no guiding principles or long-term plans for its growth. We have failed to recognize that AI has become a public good. Failing to treat AI as a public good—the way we do our breathable air—will result in serious, insurmountable problems. AI is rapidly concentrating power among the few, even as we view AI as an open ecosystem with few barriers. With AI, anyone can build a new product or service, but they can’t easily deploy it without the help of the G-MAFIA.
Artificial intelligence is typically defined using three broad categories: artificial narrow or weak intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). The transition from ANI to ASI will likely span the next 70 years. An evolutionary algorithm will keep generating, discarding, and promoting solutions millions of times, producing thousands or even millions of offspring, until eventually it determines that no more improvement is possible. Evolutionary algorithms with the power to mutate will help AI advance on its own.
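The generate–discard–promote loop described above can be sketched in a few lines. This is a minimal toy version of an evolutionary algorithm under my own assumptions (the function names, the `(1+λ)`-style selection, and the bit-flip example are illustrative, not anything from the book or from Google Brain):

```python
import random

def evolve(fitness, seed, mutate, population_size=50, patience=20):
    """Minimal evolutionary loop: mutate the current best candidate,
    promote the fittest offspring, and stop once `patience` consecutive
    generations bring no improvement (i.e. it decides it can't do better)."""
    best, best_fit = seed, fitness(seed)
    stale = 0
    while stale < patience:
        offspring = [mutate(best) for _ in range(population_size)]
        candidate = max(offspring, key=fitness)
        if fitness(candidate) > best_fit:
            best, best_fit = candidate, fitness(candidate)
            stale = 0  # improvement found: keep evolving
        else:
            stale += 1  # discarded generation: one step closer to stopping
    return best

# Toy example: evolve a bit string toward all ones,
# flipping each bit with 10% probability per offspring.
result = evolve(
    fitness=lambda bits: sum(bits),
    seed=[0] * 10,
    mutate=lambda bits: [b if random.random() > 0.1 else 1 - b for b in bits],
)
```

Real systems replace the toy fitness function with something expensive to evaluate (e.g. the accuracy of a candidate neural network), which is what makes running millions of such generations remarkable.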
AI will become unrecognisable
Depending on whom you talk to, the maximum operations per second our human brains can perform is one exaflop, which is roughly a billion-billion operations per second. Those ops account for lots of activities that happen without our direct notice. Unlike computers, we can’t easily change the structure of our brains and the architecture of human intelligence. At our current rate, it will take humans 50 years of evolution to notch 15 points higher on the IQ scale. But within that same timeframe, AI’s cognitive ability will not only supersede us—it could become wholly unrecognizable to us, because we do not have the biological processing power to understand what it has become.
In the long evolution of intelligence and our road to ASI, we humans are analogous to the chimpanzee. Superintelligent AI would likely make decisions in a nonconscious way using a logic that’s alien to us. An ultra-intelligent machine could also design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.
The coming “intelligence explosion” describes not just the speed of supercomputers or power of algorithms, but the vast proliferation of smart thinking machines bent on recursive self-improvement. Imagine a world in which systems far more advanced than AlphaGo Zero and NASNet not only make strategic decisions autonomously but also work collaboratively and competitively as part of a global community. Former DARPA program manager Gill Pratt argues that we’re in the midst of a Cambrian explosion right now—a period in which AI learns from the experience of all AIs, after which our life on Earth could look dramatically different than it does today.
Amy Webb suggests three scenarios: the optimistic, the pragmatic, and the pessimistic. China plays a pivotal role in all of them. They range from utopia to a version of “Future Crimes”: a world where AI runs us instead of the other way around, where platforms split societies, where you can’t trust machines or data, where nudging becomes nagging, with splinternets, augmented humans, digital glitches, and ultimately digital occupation. The pessimistic version combines China, brain interfaces, nanotechnology, and annihilation. An AI version of “Ghost Fleet” (highly recommended). For that reason alone you should read the book. You will never look at machines and electronics in the same way again.