
AI – Caging The Stochastic Parrots

  • Posted by: Arunanjali Securities
  • Category: Business

While Alan Turing, the “Enigma man” of the Second World War, might have provided the logical foundation for the present-day ubiquitous computer, it was Arthur C. Clarke who in 1968 conceived of the fictitious computer HAL (Heuristically programmed ALgorithmic computer) in his celebrated science-fiction novel, 2001: A Space Odyssey. HAL was sentient and could behave almost like a human; it was the brain that controlled the spaceship. But once its logic got corrupted, no one was able to control it. The book popularized the idea of Artificial Intelligence (AI).

Fast forward to the present, and we have what are called stochastic parrots. These are computer-generated models that “parrot” back what they have learned from large datasets but are not capable of true reasoning or understanding. A lay person like me can appreciate the impact of these “stochastic parrots” now that AI tools like ChatGPT or Midjourney are easily accessible and make life easier. But for many businesses these tools could prove disruptive. Google, which became a digital giant with disruptive technologies like its search engine, now feels itself threatened by ChatGPT. So what is this new beast on the prowl? Many schools and educational institutions are discouraging or even forbidding students from using ChatGPT, though there are those who say that it can aid and enrich non-proctored project or research work. But before getting submerged in AI mumbo jumbo, it would be useful to understand what ChatGPT is in layman’s language.

GPT, that is, Generative Pre-trained Transformer, is a type of Large Language Model (LLM) that has been trained to generate human-like text. It is called “generative” because it can generate new text based on the input it receives, and “pre-trained” because it is trained on a large amount of text data beforehand. The “transformer” part of the name refers to the neural-network architecture used by the model. GPT-3 (GPT-4 is the latest version) has many uses and the potential to disrupt many businesses. It can generate structured text and even programming code automatically. GitHub Copilot, powered by a descendant of GPT-3, is used by developers to write code and has been estimated to generate around 40% of the code in files where it is enabled. GPT can also reduce the need for human intervention by running more efficient and interactive chatbots to deal with customers. Experts predict that, with the fintech revolution underway, converting regulation into algorithms could be the road ahead for financial regulators. GPT could facilitate this transformation, rendering regulatory bureaucracies lean and yet more efficient.
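
To make the “stochastic parrot” idea concrete, here is a toy sketch in Python (purely illustrative, using a made-up three-sentence corpus): it records which word has been seen to follow which in its training text, then generates new text by sampling from those observed continuations. Real systems like GPT use far larger transformer networks and corpora, but the underlying principle of echoing learned patterns without understanding them is similar.

    # A toy "stochastic parrot": it can only echo back word patterns it has
    # seen in its (made-up) training text; no reasoning is involved.
    import random
    from collections import defaultdict

    corpus = ("the market rose today . the market fell today . "
              "the market rose sharply today .").split()

    # Learn which words have been observed to follow each word.
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    def parrot(start="the", length=10):
        """Generate text by repeatedly sampling an observed next word."""
        word, output = start, [start]
        for _ in range(length):
            candidates = next_words.get(word)
            if not candidates:      # nothing ever followed this word: stop
                break
            word = random.choice(candidates)
            output.append(word)
        return " ".join(output)

    print(parrot())   # e.g. "the market fell today . the market rose sharply ..."

Run it a few times and the output changes, but it never says anything it has not already, in fragments, read.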

The history of business is replete with celebrated names that have gone extinct because of disruption by new technologies: Kodak, Polaroid and, closer home, Moser Baer. Any number of occupations, from content creators and copyright holders to translators and travel agents, have been adversely affected by digital technologies. Apart from questions of ethics, data privacy and justice, experts are now throwing light on the likely disturbing consequences of AI for mankind’s future. Yuval Noah Harari, historian and philosopher, paints an apocalyptic scenario in which AI applied without guardrails could “hack” the operating system of human civilization (cf. The Economist, April 28, 2023). The dangers to mankind and civilization, according to many, are real: self-replicating, sentient computers could take over and run their own operations, as the well-known astrophysicist Stephen Hawking feared. Most countries have established protocols for cloning and for gene editing with CRISPR to produce, say, designer babies or exotic species. Harari argues that just as a pharmaceutical company cannot release new drugs before testing for both their short-term and long-term side effects, so too tech companies should not release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for the new technology.

But there is an equally convincing body of opinion that argues against such controls, holding that the so-called threats are imaginary or grossly exaggerated. History tells us that every time the world of commerce and industry has faced an innovative or disruptive technology, it has been opposed not only by Luddites but also by entrenched industries. In 1845, Frédéric Bastiat, a French economist well known for his wit, put out a satirical petition to the Chamber of Deputies calling for a law to shut out the sun so that the candle makers of France could survive and prosper! À la Bastiat, there are thinkers who plead for the free play of ideas and innovations. And there are experts in the AI field who doubt that an artificial mind will ever say, like Descartes, “I think, therefore I am” (cogito, ergo sum), except by plagiarizing the French philosopher.

Nevertheless, public opinion (driven by fear of the unknown or by analytical prediction) is veering round to the need to put some guardrails in place before deploying AI tools for use by society at large. The question is: who will regulate AI? Regulation at the national level will have its own challenges. For instance, the US and China will each seek to dominate AI outcomes: while the US is likely to bat for private interests, China would most certainly bat for the interests of the CCP. Hence a globally representative body like the UN could constitute a commission of well-known scientists of international stature (certainly not career bureaucrats), which could set its own terms of reference to evaluate the risks posed by sentient and self-replicating AI systems.
