The Natural Fear of the Unknown
February 1, 2024
Kam Zarrabi

Even though the spirit of adventure in search of the new and the unknown has been the driving force behind all the physical and cultural developments in human civilizations, the instinctive suspicion and fear of anything truly new and unknown has also helped protect us from its potential dangers. Observing the cautious approach of animals in the wild to any unfamiliar object is a testament to the survival advantage of this natural instinct, in which we humans also share.

We could name some of the discoveries that have changed the course of human civilizations throughout the ages to varying degrees; among them, the advent of the wheel, the bow and arrow, gunpowder, cures for the microbial causes of diseases, electricity, the internal combustion engine, nuclear power, computers, the internet, and now artificial intelligence.

All such discoveries had potential for both good and evil purposes. The wheel facilitated the transportation of good and bad people from place to place and, at the same time, enabled the creation of chariots designed as war machines. The bow and arrow was an efficient tool for hunting, as well as for killing fellow humans. Nuclear power has paved the way for creating clean energy free of fossil-fuel pollutants, and has also become the ultimate weapon of war.

Now it is AI, artificial intelligence, that has become the subject of concern, as well as trepidation: a potential tool for the advancement of human knowledge and capabilities, and also, in the wrong hands, a potent means of inflicting catastrophic harm on human civilization. This is how nuclear energy was viewed near the end of the Second World War, and for very good reasons. The potential dangers of another global war, one that might involve the widespread use of nuclear bombs, are still with us, regardless of all the efforts to safeguard against the use of such weapons in the inevitable international conflicts to come.

But what is this AI we hear so much about, which, to the uninitiated, meaning most of us, is as confusing and mind-boggling as the new developments in quantum computer technology?

In a bare-bones, uncomplicated sense, we are talking about an accumulation of statistical data, plus an algorithm to draw reasonable conclusions from that data. For example, statistically, when the air temperature is "x" and the humidity level is "y" percent, and there is a further cooling trend, the chances of precipitation are such and such percent. Adding to those statistics other factors, such as the type of cloud cover, wind direction and ground elevation, plus historical data about similar conditions, the accuracy of our prediction approaches near certainty. This is not rocket science!
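For readers who like to see the idea in concrete form, here is a minimal sketch in Python of the kind of statistical reasoning described above. The numbers, the tiny table of past observations and the tolerance threshold are all made up for illustration; real forecasting systems use far more data and far more sophisticated models.

```python
# A minimal sketch, with made-up records: estimate the chance of rain given
# temperature, humidity and a cooling trend, purely by counting how often rain
# followed similar conditions in past observations.

# Hypothetical historical observations: (temperature_F, humidity_pct, cooling, rained)
history = [
    (55, 85, True,  True),
    (57, 80, True,  True),
    (60, 75, True,  False),
    (70, 40, False, False),
    (65, 90, True,  True),
    (72, 35, False, False),
]

def chance_of_rain(temp, humidity, cooling, tolerance=10):
    """Share of past days with similar conditions on which it rained."""
    similar = [
        rained
        for (t, h, c, rained) in history
        if abs(t - temp) <= tolerance and abs(h - humidity) <= tolerance and c == cooling
    ]
    if not similar:
        return None  # no comparable past conditions on record
    return sum(similar) / len(similar)

print(chance_of_rain(temp=58, humidity=82, cooling=True))  # 0.75 on this tiny sample
```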

Similarly, statistically speaking, if a person has a fever, a stuffy nose, aching joints and muscles, a throbbing headache, coughing and a sore throat, chances are better than 90% that we are dealing with a case of flu. But that same data might also mean a 10% chance of complications other than a common, seasonal flu. Further statistical data about symptoms and signs, plus data about similar documented cases, would narrow the diagnosis down to a reasonably correct conclusion, resulting in what is, again statistically, the best approach to treating the problem.
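The same counting-and-combining logic can be written down in a few lines. The sketch below uses a naive Bayes-style calculation, with priors and per-symptom likelihoods that are entirely invented for illustration, not taken from any medical source; it simply shows how several symptom statistics can be combined into one probability.

```python
# Toy illustration: combine made-up symptom statistics into a diagnosis probability.
# P(flu | symptoms) is proportional to P(flu) times the product of P(symptom | flu),
# compared against the same product for "other" causes (naive independence assumption).

prior = {"flu": 0.3, "other": 0.7}          # assumed base rates, illustrative only
likelihood = {
    "flu":   {"fever": 0.9, "stuffy_nose": 0.8, "aching_joints": 0.7, "sore_throat": 0.8},
    "other": {"fever": 0.2, "stuffy_nose": 0.4, "aching_joints": 0.1, "sore_throat": 0.3},
}

def posterior_flu(symptoms):
    """Return P(flu | observed symptoms) under the naive independence assumption."""
    scores = {}
    for condition in prior:
        score = prior[condition]
        for s in symptoms:
            score *= likelihood[condition][s]
        scores[condition] = score
    return scores["flu"] / sum(scores.values())

# With all four symptoms present, the toy numbers give roughly 0.99,
# in the spirit of the "better than 90%" figure above.
print(round(posterior_flu(["fever", "stuffy_nose", "aching_joints", "sore_throat"]), 2))
```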

At this level of complexity, any well-equipped medical center is capable of diagnosing and treating a patient with a high degree of efficiency. But there are cases where the contributing causes of the disorder are much more complicated, and might require X-rays, MRIs, CT scans and various other tests, the patient's medical history, etc., to pin down the cause of the problem and to come up with an appropriate treatment formula. In other words, the more information that can be accessed, some of it seemingly not pertinent or unrelated to the case at first glance, the better the chances of finding some relevant path to the correct answer.

This example showed just the tip of the iceberg. In other fields, such as politics, economic trajectories, natural resources management, climate change, human population dynamics, and many other areas, accumulating sufficient statistical data for proper analysis and for coming up with the most likely predictions is a monumental task beyond imagination. Not only could all the known or knowable contributing factors in any case be immeasurably numerous, but a broader statistical analysis could introduce an incalculable number of other factors that might appear irrelevant and purely coincidental at first, yet might actually have some bearing, albeit tangential, on the case.

This is where AI technology comes into play. The system can be fed billions or trillions of bits of data to consume, digest and sort out. Then comes the algorithm, or the prompting formula, to direct the AI to cough up the desired information. There might be a near-infinite number of conditions or circumstances coinciding with any event, most of which have nothing to do with that event. But if the AI's statistical information indicates that some of these circumstantial conditions, no matter how seemingly irrelevant, have contributed to similar outcomes in parallel cases, then the irrelevant becomes relevant enough to validate the conclusion. This kind of capacity to store and access information on demand is far beyond the capabilities of a human brain or intellect, but well within the domain of artificial intelligence.
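The bare idea can be sketched with randomly generated data: among many candidate factors, most of them pure coincidence, flag the ones whose past values line up with the outcome far more often than chance would allow. Everything below is invented for illustration; real systems do this at vastly larger scale and with far more sophisticated statistics.

```python
# Rough sketch: one genuinely related factor hidden among many coincidental ones.
import random

random.seed(0)
n_cases = 1000

outcome = [random.random() < 0.5 for _ in range(n_cases)]
factors = {"relevant_factor": [o if random.random() < 0.9 else not o for o in outcome]}
for i in range(50):
    factors[f"noise_factor_{i}"] = [random.random() < 0.5 for _ in range(n_cases)]

def agreement(values):
    """Fraction of cases where the factor's value matches the outcome."""
    return sum(v == o for v, o in zip(values, outcome)) / n_cases

# Keep only factors that agree with the outcome far more often than chance (0.5).
flagged = {name: round(agreement(vals), 2)
           for name, vals in factors.items() if abs(agreement(vals) - 0.5) > 0.1}
print(flagged)  # only 'relevant_factor' should clear the bar
```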

The other concern we have regarding AI is with its Large Language Models of various sorts, such as ChatGPT, Bing AI and Google Bard, where the systems are fed ever-increasing amounts of text from available literature in as many fields as the programmers can handle.

This kind of data reveals patterns in the common usage of vocabulary, grammar and syntax, which make it possible for the AI to predict what word or phrase is most likely to follow a given line of text. In responding to a question about a certain subject, for example, the AI uses the key words from the question, searches and collects information from its encyclopedic resources, sorts it out and formulates its response in language that can appear to be a genuine, original article. In actual fact, however, the information collected by the AI is not the result of some original research on the subject matter by the AI, but is copied, or plagiarized, from available literature, hidden from the eyes of the reader of the response. This, of course, can create major copyright issues when the source materials are not cited or permission for their use is not obtained.
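The "predict the most likely next word" idea can itself be shown in miniature: count, in a small sample of text, which word most often follows each word, and use those counts to continue a phrase. The sample sentence below is made up, and real large language models work on an incomparably larger scale with far richer statistics, but the underlying principle of learning patterns from existing text is the same.

```python
# Bare-bones next-word prediction from word-pair counts in a tiny text sample.
from collections import Counter, defaultdict

sample_text = (
    "the wheel changed human civilization and the wheel enabled chariots "
    "and the wheel carried armies and the bow and arrow changed hunting"
)

words = sample_text.split()
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1  # count how often each word follows 'current'

def most_likely_next(word):
    """Return the word that most often followed 'word' in the sample text."""
    return following[word].most_common(1)[0][0] if following[word] else None

print(most_likely_next("the"))    # 'wheel' -- the most frequent follower in the sample
print(most_likely_next("arrow"))  # 'changed'
```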

Through a continuing training and learning process, these LLMs can gradually create ever better textual materials that mimic human writing in the form of articles, term papers and research documents. They can even avoid plagiarism charges by rewording the source texts and employing other evasive language tools to fool the average checker.

Large Language Models are fantastic learning tools that can assist teachers, students, researchers and the like in positive, productive ways. They can answer, within seconds, questions that would take an individual weeks or months to research. Much of the AI's ability to respond comprehensively or correctly depends on how well the person asking the question phrases the request for information.

I prompted the ChatGPT4 version to give me the "gist" of my own latest publication, Necessary Illusions, since, as the author of that book, I knew the right answer. The response was almost immediate, literally within seconds. In that blink of an eye, the AI had read through the entire book and reached its conclusion. I found that what the AI had concluded was less than totally accurate and far short of complete. But, instead of blaming the "machine" for that shortcoming, I blamed the book and my own somewhat convoluted, and to the AI at least confusing, approach to the main topic. A human reader would have had little problem summarizing the main "gist" of the book in one sentence, i.e.: As an effective survival tool, the human mind creates illusory imageries and beliefs to resolve the perplexing enigmas of its existence. I do believe, however, that in time the LLMs will gain the ability to overcome their current shortcomings, and possibly do even better than an average human would!

These Large Language Models are still growing in complexity, and their performance should eventually become, in a Turing Test, indistinguishable from that of humans, while operating at unimaginably faster speed!

There is no question that artificial intelligence technologies have the potential for misuse in the hands of troublemakers, for example by creating false or damaging narratives, especially in the electronic media accessible to hundreds of millions of smartphone, tablet or computer users, which means most of us. Meanwhile, robotic technologies have already shown how the widespread use of electronically programmed and guided gadgets can replace human workers in cost-saving, more productive ways. This will mean retraining millions of people whose jobs might be taken over by machines that do not require pay increases, medical coverage or retirement benefits!

Hopefully, knowing what makes AI tick, namely statistical information plus operational algorithms, takes some of the mystery out of this fast-evolving technology.

However, as I mentioned above, the same concerns accompanied the advent of other new and novel discoveries, some of which continue to be of universal concern for humanity to this day. But would humanity have done better without, say, computer technology or the internet? I know many people who'd say yes!