Dire warnings abound as experts tell us that Artificial Intelligence is almost a reality, if it isn’t already. Leaders in the technology, such as Elon Musk, have even called for a six-month moratorium on AI research, citing existential threats.
But in fact, this consensus is a myth. Not everyone who studies the field is convinced that AI is at hand, or that it is even ever possible.
In 2020, Ragnar Fjelland, Emeritus Professor at the Centre for the Study of the Sciences and the Humanities at the University of Bergen, wrote an essay for the journal Humanities and Social Sciences Communications titled “Why general artificial intelligence will not be realized.” It is long and complex, but very much worth reading as a caution not to accept the reality of AI too easily.
Drawing on the work of scientists and philosophers dating back to Plato, the line from Fjelland that stands out is, “to put it simply: The overestimation of technology is closely connected with the underestimation of humans.” This means that in our rush to declare the reality of AI, what we are really doing is dumbing down the very concept of human intelligence.
The modern debate over AI began with the mid-20th century scientist Alan Turing, who devised a set of tests. The most famous was whether an AI could fool a human being into thinking they were speaking to another human being. This has more or less been achieved, but it is a deeply insufficient test to establish that a computer is engaged in human-style intelligence.
Can a computer today spontaneously crack a funny joke? Can it accidentally commit a Freudian slip, then recognize and reflect on it? Can it dream? The latter is a telling example of how science has put the AI cart before the horse of human intelligence. There is no consensus on what exactly a human dream is, or why dreams exist. How then can we possibly establish whether a computer is capable of one?
Moreover, much of human knowledge and intelligence is tacit, never articulated or formalized. For example, as Fjelland points out, most humans know how to walk, but very few know how they walk. We do not teach our toddlers perambulation by showing them the math and physics of it.
This is knowledge gained by experience with physical phenomena, not through pure mental exercise. In large part the vastness of human intelligence is not so much contained in what we know, but in what we don’t know and yet can do anyway.
A significant reason why we do not hear these questions asked is that the experts we most often rely on to tell us whether AI is real, or achievable, are themselves experts in AI. Of course they think it’s real. They have dedicated their careers to it, and their funding depends on it. That doesn’t mean they are wrong, but it does make them an interested party in the debate, and it means that others, such as philosophers and theologians, have a role to play in these definitions.
None of this is to suggest that machine learning will not have a major and potentially dangerous impact on society. If hundreds of thousands of truckers lose their jobs to self-driving vehicles, it’s a problem. But it’s not a new problem. Technology has been displacing human work since the ancients invented the plough. And in any case, self-driving vehicles do not actually require artificial intelligence.
The far more important and complex questions involve creativity and intuition. The comical columns concocted by ChatGPT do not suggest that an artificial William Faulkner or James Joyce is right around the corner, or achievable at all. Furthermore, as we can see from the consistently politically biased responses the system gives to prompts, there is clearly more than a little human influence on the end product.
Might artificial intelligence be real and dangerous? Perhaps. But there is also enormous danger in human beings holding the capacities of their own intelligence too cheap. AI is not a functioning model of the human mind, and dispossessing ourselves of that notion is key to understanding our technological age.
Will there, one day, be a computer that can match the marvels of Shakespeare? For his part, the Bard thinks not. “What a piece of work is a man,” he wrote, “How noble in reason, how infinite in faculty, In form and moving how express and admirable, In action how like an Angel, In apprehension how like a god.”
Try though they may, all the Elon Musks and all of their men cannot create a computer that can compose, or meet the criteria of, that description of human intelligence. Human beings are still, first and foremost, the greatest storytellers of their own reality, and there is no good reason to believe that can, or will, ever change.