AI is BS: the Singularity is not Near

cod3Ninj@
6 min read · Mar 30, 2024


Photo by Steve Johnson on Unsplash

AI, or “artificial intelligence”, has been the subject of gross over-hype and under-delivery since the term was coined. In this article, I will attempt to peel back the layers of misinformation about AI and provide grounded context for thinking about it.

First, why the title? “The Singularity is Near” is a 2005 book by Ray Kurzweil, which contends that a technological “singularity” event, in which artificial intelligence surpasses human intelligence and improves itself at an unfathomable rate, thus supplanting humanity, is almost at hand. It’s an intriguing narrative that makes for a nice story, one we’ve already seen replayed time and again in science fiction literature, Hollywood films, and video games. By all reasonable evidence, however, the narrative is conjecture at best and a modern-day piece of mythology.

What is AI?

AI is a blanket term for statistical inference algorithms, which use enormous mathematical models to carry out a multitude of numerical optimization and statistical aggregation operations. When combined with other technologies, this can provide very powerful automation, but notably, that’s the purpose of most technology: “AI” is just another form of automation. Try swapping out the term “AI” for “automation” whenever you see it in the news, in headlines, or in corporate marketing, and you’ll soon realize that there’s a lot less novel about “AI” than initially meets the eye. Better yet, try swapping out “AI” for “magic”, and you’ll really see the degree to which other people are trying to mislead you.
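To make the “numerical optimization, not magic” point concrete, here is a minimal sketch of what “learning” amounts to under the hood: adjusting a model’s parameters to reduce a numeric error. The data and hyperparameters below are made up for illustration; real systems do this same thing at vastly larger scale.

```python
# A minimal sketch of "learning" as numerical optimization:
# fit y = w*x + b to noisy data by gradient descent on squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # illustrative (x, y) pairs

w, b = 0.0, 0.0   # model parameters, initialized arbitrarily
lr = 0.01         # learning rate (step size)

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # step "downhill" on the error surface
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # parameters settle near the best-fit line
```

There is no understanding anywhere in this loop, only arithmetic driving an error number down; a modern language model is this same procedure scaled up to billions of parameters.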

While some of these “AI” models are loosely inspired by biological structures such as the human brain (e.g., artificial neural networks), practically speaking there is little similarity between them. Case in point: no one knows, in any substantive capacity, how the human brain “learns”, and without a fundamental mathematical breakthrough, only naïve connectivity patterns for correlating data within an artificial neural network can be optimized. Moreover, there is no sign that such a breakthrough is on the horizon, or that one is even possible at all. AI models possess no more reasoning capability or emotion than a digital toaster oven.
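To illustrate the gap between the biological metaphor and the practice, here is a hedged sketch of everything an artificial “neuron” layer actually computes: a weighted sum pushed through a simple squashing function. The weights below are invented purely for demonstration.

```python
import math

def layer(inputs, weights, biases):
    """One 'neural' layer: weighted sums squashed by a sigmoid.
    This is the whole trick; there is no cognition, only arithmetic."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return outputs

# Made-up weights: two inputs -> two hidden 'neurons' -> one output
hidden = layer([0.5, -1.0], weights=[[1.0, 2.0], [-1.5, 0.5]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[2.0, -2.0]], biases=[-0.5])
print(output)  # a single number between 0 and 1
```

Stack enough of these layers and tune the weights by optimization, and you get pattern matching that can look uncannily capable, but the mechanism is no closer to a brain than the toaster oven mentioned above.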

Why is there so much misinformation about AI?

So why is there so much hype and misinformation about AI? Well, here are a few ideas:

1. The idea of a “singularity” and the idea of truly intelligent machines makes for a compelling story and mythology. The allure of possibility or the fear of impending doom that these machines can cause captures the human imagination.

2. Automation is, and always has been, fascinating to human beings, even when it’s a parlor trick or not particularly practical. Perhaps this is because, when it is practical, it saves us a ton of work.

3. Companies and individuals that develop AI have little interest in the hype dissipating and every interest in fueling it, because they’re making money off of it.

4. People want to appear “in the know”, often due to self-esteem issues. Just as many people insist, with no evidence, that they’re spiritually enlightened, many people push hype and misinformation about “artificial intelligence” because they want to associate themselves with something powerful and mysterious. The more powerful and mysterious, the better.

5. People in power who are obsessed with control also have their eyes set on AI as a promising technology for controlling other people, and even where AI under-delivers, the illusion of possessing some soon-to-be omniscient capability is a powerful control mechanism unto itself.

The Latest AI Hype Cycle

AI has gone through a multitude of hype cycles over the years, but the latest and most prolific, at least in recent decades, was set off by the public release of OpenAI’s ChatGPT chatbot on November 30, 2022. ChatGPT was at the time an impressive exercise in language generation, so good that several people remarked that it passed the Turing Test, which computer scientist Alan Turing proposed as a test for whether a machine exhibits intelligent behavior on par with that of a human. The idea of the Turing Test (also known as the Imitation Game) is that if a human judge chats with a machine on one end and another human on the other, and the machine’s responses are indistinguishable from the human’s, the machine has passed. This is an interesting talking point, until you step back and realize that, unlike Turing’s prolific computer science contributions, which were mathematically proven and formalized, the Turing Test is the opinion and speculation of one man who, while intelligent, died in 1954, long before any remotely modern computer interface, hardware, or software. What would Turing have made of video games and chess programs, in which a computer can achieve performance indistinguishable from a human’s with fairly naïve search algorithms?

ChatGPT plays the imitation game well, and has been revolutionary in some respects, but in the span of nearly a year and a half, the AI reality has fallen dramatically short of the AI hype. Sure, we’ve seen marginal improvements in all kinds of automation technologies, but compared to the hype and the hundreds of billions of dollars invested, AI developments have been boring.

Google Search, which many said would be totally supplanted, looks much as it has always looked and ostensibly operates much as it always has, with a few extra AI bells and whistles bolted on. Programmers have not been replaced in any substantive capacity. Instead, programmers are using AI systems to improve their efficiency, and far from being replaced, programmers who develop AI systems are in greater demand than ever before. AI can be great for generating code snippets from a general description, but it takes a talented and competent programmer to do much of anything interesting with it. Similarly, writers are not out of work; they are instead using AI to gather ideas and help with writer’s block.

TL;DR — AI is just another tool!

The Threat of AI

The narrative of AI as a threat to humanity of its own volition is fake, because AI doesn’t have a volition of its own and is nowhere close to having one. Like any tool, however, AI can be misused by powerful people and entities for nefarious ends. AI is not out to get you; people might be. The potential for abuses involving censorship, surveillance, propaganda, and the massive aggregation of personal data is manifest, but these are behaviors that bad actors engage in anyway. The Stasi did not use AI, but they probably would if they were around today; they did employ the other technologies of their time to the same end.

A secondary threat is the potential for overreliance on AI, as people buy more and more into the notion that AI is actually intelligent and capable of decision-making. Putting critical decisions in the “hands” of a tool that isn’t actually capable of thought can have disastrous and tragic consequences. In a broad sense, this is the same issue of misuse that attends all powerful technology: used correctly, an automobile can be a great thing; used incorrectly, it can be extremely dangerous.

Finally, much of the discussion of threats posed by AI concerns its impact on job markets. Some speculate that AI will put large numbers of people out of work. While this remains to be seen, it is worth noting that this concern is nothing new or unique to AI compared with any other automation technology, and historically, automation has not so much eliminated jobs as changed the job landscape by improving productivity. The narrative of AI as a job killer is often built on the false narrative of AI as “intelligent”. If we instead think of AI as a tool for productivity, it might well lead to job creation. Much depends on how people manage the rollout and regulation of AI: if only certain actors are allowed to benefit, job markets might be negatively affected by cartel-like behavior. AI itself is not coming for your job, but corporations, governments, or other entities might be, especially if they’re all mobbed up with one another.

Conclusions

· What we call “artificial intelligence” is a powerful tool for automation, but labeling it “intelligent” is a misnomer. It is not intelligent and will not be intelligent any time soon.

· To see past the misinformation about AI, substitute the term “automation” for “AI” whenever you hear it. To see past the hype about AI, substitute the term “magic” instead.

· The ability of modern AI to pass the Turing Test is more an indicator of how flawed the Turing Test is as a benchmark for intelligence than anything else.

· The effects of AI will be gradually transformative, different from the hype, and probably underwhelming in the short term.

· The threats of AI are not intrinsically from AI. They are from humans using AI towards nefarious ends.
