ANDY CLARK
Philosopher; Chair in Logic & Metaphysics, University of Edinburgh; author, Supersizing the Mind
The last decades have seen fantastic advances in machine learning and robotics. These are now coupled with the availability of huge and varied databases, staggering memory capacities, and ever faster and funkier processors. But despite all that, we should not fear that our Artificial Intelligences will soon match and then rapidly outpace human understanding, turning us into their slaves, toys, pets, or puppets.
For we humans benefit from one gigantic and currently human-specific advantage: the huge yet nearly invisible mass of gradually accrued cultural practices and innovations that tweak and pummel the inputs that human brains receive. Those practices are, crucially, delicately keyed to the many initial biases—including especially biases for sociality, play, and exploration—installed by the much slower processes of biological evolution. In this way, a slowly accumulated mass of well-matched cultural practices and innovations ratchets up human understanding.
By building and then immersing ourselves in a succession of designer environments, such as the human-built worlds of education, structured play, art, and science, we restructure and rebuild our own minds. These designer environments are purpose-built for creatures like us, and they “know” us as well as we know them. As a species, we refine them again and again, generation by generation. It is this iterative restructuring, and not sheer processing power, memory, mobility, or even the learning algorithms themselves, that is the final (but crucial) ingredient in the mental mixture.
To round it all off, if recent arguments by Oxford psychologist Cecilia Heyes are correct, many of our capacities for cultural learning are themselves cultural innovations, acquired through social interaction rather than flowing directly from biological adaptations. In other words, culture itself may be responsible for many of the mechanisms that give the cultural snowball the means and momentum to deliver minds like ours.
Why does this mean that we should not fear the emergence of superintelligent AI anytime soon? The reason is that only a well-structured route through the huge mass of available data will enable even the best learning algorithm (embodied perhaps in multiple, active, information-seeking agents) to acquire anything resembling a real understanding of the world—the kind of understanding needed even to generate the goal of dominating humankind. Such a route would need to be specifically tailored to the initial biases, drives, and action capacities of the machines themselves. If the slow coevolution of body, brain, biases, and an ever-changing cascade of well-matched cultural practices is indeed the key to advanced cognitive success, we need not fear the march of the machines. For the moment, there is simply nothing in the world of the AIs that looks set to provide that kind of enabling ladder.
“Deep learning” algorithms are now showing us how to use artificial neural networks in ways that come closer than ever before to delivering learning on a grand scale. But we probably need “deep culture” as well as deep learning if we are ever to press genuine hyperintelligence from the large databases that drive our best probabilistic learning machines.
That means staged sequences of cultural practices, delicately keyed to the machines’ own capacities to act and communicate, and tuned to the initial biases and eco-niche characteristic of the machines themselves. Such tricks ratchet up human understanding in ways that artificial systems have yet even to begin to emulate.
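The nearest existing computational analogue of such staged sequences may be curriculum learning, in which a system is trained on easier examples before harder ones are admitted. The sketch below is a minimal illustration of that general idea under stated assumptions, not of any system the essay describes; the difficulty measure, the toy “model,” and the data are all hypothetical placeholders.

```python
# Minimal curriculum-learning sketch: train on the easiest examples
# first, then progressively admit harder ones. Every name here
# (difficulty proxy, toy "model", sample data) is illustrative only.

import random

def difficulty(example):
    # Hypothetical proxy: longer inputs count as "harder".
    return len(example)

def train_step(model, example):
    # Stand-in for a real gradient update: just record what was seen.
    model.append(example)

def curriculum_train(examples, stages=3):
    model = []  # toy "model": a log of training examples, in order
    ordered = sorted(examples, key=difficulty)
    for stage in range(1, stages + 1):
        # Admit the easiest fraction of the data, growing each stage.
        cutoff = int(len(ordered) * stage / stages)
        pool = ordered[:cutoff]
        random.shuffle(pool)
        for ex in pool:
            train_step(model, ex)
    return model

if __name__ == "__main__":
    data = ["cat", "a", "elephant", "dog", "hippopotamus", "ox"]
    trained = curriculum_train(data)
    print(trained[:5])  # early steps are dominated by short, easy items
```

Even this crude staging only orders a fixed dataset by a fixed difficulty score; the essay’s point is that human “curricula” go much further, being continually redesigned, generation by generation, around the learner’s own biases and capacities.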