
The “Intelligence” of A.I.

Disclaimer:

The views expressed in this article are those of the author alone and do not necessarily represent the views or policies of the company.

In the past few years, A.I. has quickly taken the world by storm. It seems to have appeared overnight and made millions dependent upon it for even the smallest tasks. From personal therapist to business advisor, people turn to A.I. in almost every aspect of life. But how intelligent is A.I. really, and is it actually helpful? Artificial Intelligence has been around for decades, dating back to the 1950s, when Alan Turing proposed the “Turing Test” to assess the intelligence of computers. In 1955, John McCarthy coined the term “Artificial Intelligence” in the proposal for a workshop at Dartmouth. Since then, the term A.I. has been widely used, often thrown around to describe intelligent computing systems until evolving standards no longer deem them intelligent.

Some say A.I. is whatever hasn’t been done yet, and it’s quite a compelling argument: when a technology is new and has double-edged effects, we’re more likely to call it A.I. Once the “A.I.” starts running reliably, the benchmark for what counts as A.I. shifts. Take one of the first A.I. programs as an example: the checkers program written by Arthur Samuel in 1952. There was a time when a computer beating a world champion at chess was considered A.I., and once it was achieved, it no longer qualified. Text-to-speech was considered A.I. too, until it wasn’t. The same goes for image recognition and the mundane programs built into our phones that we take for granted, such as Siri. At one point, these would have been considered A.I., but because intelligence is ill-defined, the goalposts keep moving, and that will likely remain the case as these systems keep advancing and human expectations continue to grow.

So, is A.I. really artificial or intelligent? Early A.I. systems were driven by hand-written rules and programs, without any “humanized” element. Today, programs like ChatGPT draw their strength from the work of humans: writers, artists, programmers. That throws the “artificial” element down the drain. As for their intelligence, they are merely statistical machines that regurgitate patterns mined from oceans of human data. Modern A.I. is essentially a sophisticated pattern-matching system presented to you in the most relatable form. When it gives you an answer, it is quite literally just guessing which word will come next in sequence, based on the data it has been trained on. At best, it is an encyclopedia. It speaks in a polished tone, expresses curiosity, and claims to have compassion and creativity, but the real story is very different, and alarming.
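To make the “guessing the next word” point concrete, here is a deliberately toy sketch: a bigram model that counts which word follows which in a small sample text, then “predicts” the most frequent successor. This is an illustrative assumption-laden simplification, not how ChatGPT is actually built; real systems use neural networks trained on vastly more data, but the underlying task is the same: pick a statistically likely next token given what came before.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

# "cat" follows "the" more often than "mat" or "fish" in this corpus,
# so the model "predicts" it -- statistics, not understanding.
print(predict_next("the"))
```

The model has no idea what a cat is; it only knows which word tends to come next. Scaling this idea up by many orders of magnitude is, at heart, what the essay means by a statistical pattern-matcher.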


A “real thinking” A.I. is likely impossible because human intelligence is not one-dimensional; our thought process is heavily shaped by emotions, which often trump logic. Machines can have knowledge of the past, but no sense of the past, present, or future. They can’t feel things like nostalgia, hunger, desire, or fear, so there will always be a gap between the data they consume (data born of real human emotions and experiences) and what they can do with it. Since consciousness arises from the integration of mental states with sensory signals such as changes in heart rate, temperature, and so much more, there will probably always be a disconnect between A.I., a machine, and consciousness, a phenomenon found only in living beings. Thus, machines remain trapped in formal logic alone.

The reason ChatGPT, or any modern general-purpose A.I., can appear creative is that there are humans behind it after all, and that is part of the problem. Generally, we would not consider entrusting our deepest secrets, turmoil, and life decisions to a random computer programmer sitting halfway across the world, but that is exactly what people are doing. An A.I. masquerading as human triggers anthropomorphic reflexes: humans start relating to the machine and even becoming emotionally dependent upon it. But the A.I. has no idea what it means to be human; it can’t discern hidden motives or anticipate suffering, and worst of all, it has no goals, ethics, or moral compass of its own unless they are injected into its code by the programmer. So real control over the machine and how it responds lies with its masters: the programmer, the company, the government.

While this view seems harsh, it is a reality that many seem blind to. Is A.I. useful? Very much so: it lets us translate, visualize, summarize, debug, code, and analyze data far faster. But it is a tool, not a replacement for humans. And, like other tools humans have created, from axes to the internet, it can and will be used as a weapon. There is serious danger in continuing to give A.I. a false human form, and this threat needs to be recognized so that more ethical models are created. It needs to be stripped of its human mask, but that is unlikely to happen, given the rate at which big tech firms seek to humanize A.I. models further.