I wrote the following piece with the intention of publishing it as an opinion article in some mainstream media outlet. It would have required some more editing and development – but since I now have my own blog, I’ve decided to put it here. -mert
I saw glimpses of genius in my early interactions with Large Language Models like ChatGPT. Until I started to notice the flaws. And our conversations, once stimulating and informative, began to irritate me. Yet I was hooked. As an academic scientist who has worked on Artificial Intelligence (AI) for over two decades, I could now imagine a future where machines excelled at any cognitive task that we, as humans, can do. Not just the boring and tedious ones, like writing emails or doing arithmetic. But also those that demand innovation and creativity, like writing poetry and doing science. Optimists argue that this type of artificial superintelligence, or ASI, will bring us to the promised land, allow us to unlock the mysteries of the universe, and solve all our problems. Pessimists worry that it will spell the end of humanity as we know it, and that an evil ASI will enslave us for its own ends.
Importantly, though, there seems to be an emerging consensus that ASI is around the corner and that we have figured out how to get there. We just need to get there quickly, so that we can make sure the ASI is good and serves our needs. That is why Big Tech is rushing to build hyper-scale datacenters to devour all our data, believing that this will, soon enough, give birth to ASI. Yet we are betting the farm on a mirage.
The proposal is tempting: that we have figured out how to create an all-knowing artificial brain that can make the best decisions, give the wisest advice, solve the hardest problems, and do the most mind-numbing tasks, with incredible speed and efficiency. There seems to be an implicit understanding that, like the Manhattan Project or the moon landing, we know what the finish line for ASI looks like. The problem is, we don’t. We are chasing shadows. We think we can enumerate all the different capabilities of our brains and turn them into quantifiable tests. The moment we do, though, we realize there are all these other aspects of our cognitive worlds that we are not capturing.
Our brains don’t work in a vacuum, on a stream of clearly defined problems. Human existence happens in a biological body, perceiving and interacting with an ever-changing physical world, and, importantly, is shaped by our societal norms. We can all agree that the human brain is proof that intelligence is achievable. Yet its capabilities are not constant, universal, or even completely measurable. Thus, we have no reason to believe that an ASI, which is supposed to represent the pinnacle of intelligence across every imaginable cognitive domain, can exist. No wonder, then, that AI researchers like me often complain about constantly moving goalposts.
Big Tech’s big bet has another secret. It rests on the hypothesis that we do not need to list or measure cognitive tasks one by one to reach ASI. We merely need to feed a gigantic neural network model massive amounts of data, set some generic objectives, and, miraculously, intellectual capabilities will emerge. This so-called “scaling hypothesis” is convenient because it reduces the pursuit of intelligence to computation, i.e., crunching numbers, which we have now mastered. It also dispenses with the need to develop theories and models of the world, which is the never-ending struggle of science.
Our supposed march toward ASI is therefore without precedent – not because, once complete, it will represent the mother of all technologies, but because our scientific and technological progress has historically relied on testability and verifiability, and the pursuit of ASI lacks this type of rigor. Furthermore, the scaling hypothesis we are betting on is based on magical thinking and discards a universal law of nature: growth, no matter how fast at first, will always slow down. There are already signs that, on the tasks we have tracked so far, our current AIs have entered the phase of diminishing returns.
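To see what diminishing returns look like here, consider a minimal sketch (my own illustration, not part of the original argument). It assumes the power-law form in which neural scaling results are commonly reported – loss falling as a power of model size toward some irreducible floor – and the constants are made-up placeholders chosen only to show the qualitative shape:

```python
# Hypothetical power-law scaling curve: loss(N) ~ a * N**(-alpha) + floor.
# The constants a, alpha, and floor are illustrative assumptions, not
# measured values from any real model family.

def loss(n_params: float, a: float = 10.0, alpha: float = 0.07, floor: float = 1.7) -> float:
    """Loss as a function of model size under an assumed power law."""
    return a * n_params ** -alpha + floor

prev = None
for n in [1e8, 1e9, 1e10, 1e11, 1e12]:  # 100M to 1T parameters
    current = loss(n)
    note = "" if prev is None else f"  (gain over 10x smaller: {prev - current:.3f})"
    print(f"N = {n:.0e}: loss = {current:.3f}{note}")
    prev = current
```

Each tenfold increase in scale buys a smaller absolute improvement than the one before, and the curve can never cross the assumed floor. That flattening, however impressive the early gains, is the slowdown I mean.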
To be sure, our brains are capable of incredible feats, and trying to replicate these with artificial technologies can be immensely beneficial – both in terms of the utility of these technologies and for gaining insights into the infinite complexity of human existence. However, I believe that scaling our way to ASI is a pipe dream that is wasting too many valuable resources, negatively impacting our environment, and distracting us from working on real and immediate problems, where we can agree on goals and measure progress. Instead of spending hundreds of billions of dollars chasing the fantasy of a God-like technology, we could be building specialized AIs to tackle important yet concrete questions, like how to prevent Alzheimer’s disease or produce sustainable energy cheaply and at scale. This type of focused AI research, I believe, is what we should be investing in.