An exploration of the new artificial intelligence application that has been taking the world by storm.
By Prince Addo
Photo Credit: DreamStudio.ai
A ChatGPT logo generated by Stable Diffusion, a text-to-image model released in 2022. ChatGPT is the latest AI phenomenon.
On November 30, 2022, OpenAI, a Silicon Valley-based company, released their new transformer-based deep neural network model called GPT-3.5, and, almost as an aside, a web application called ChatGPT.
Partly meant to showcase the power of GPT-3.5, the web application was a massive success. It surpassed 1 million users in five days and 100 million in two months, a milestone that took Instagram more than two years to reach.
These events are eerily similar to those in the fictional prelude of Max Tegmark’s bestselling book, “Life 3.0.”
The story depicts the Omega Team, a secretive division within a corporation, tasked with building artificial general intelligence. To accomplish this, the team followed British mathematician Irving Good’s idea: build an intelligent machine that builds intelligent machines, which in turn build still more intelligent machines, a recursion that continues until the machine achieves superintelligence.
The machine they built, called Prometheus, achieved superintelligence in about a week, to their surprise. Within the span of a few years, the Omega Team had control of the entire world.
ChatGPT is far from the Omega Team’s Prometheus, but nevertheless, this simple model has taken the world by storm.
“It’s all anyone wants to talk about,” one chief executive said of ChatGPT at the World Economic Forum in Davos.
ChatGPT is a transformer-based model that processes input and produces output based on what it has previously learned. What makes ChatGPT so impressive is not the underlying technology, which is relatively easy to reproduce; it is the enormous amount of data that was used to train the model.
In classical programming, a programmer enters instructions and data for the computer to process; the computer then processes that data using the instructions and produces a result.
This process works fine and underpins most modern infrastructure, but classical programming cannot handle complex, fuzzy problems like computer vision and natural language, because those tasks are hard to define precisely, and computers require precise definitions.
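To make the contrast concrete, here is a minimal sketch of the classical approach, where the programmer writes the rule explicitly and the computer only executes it (the temperature conversion is just an illustrative choice, not an example from any particular system):

```python
# Classical programming: the programmer supplies the exact
# instructions, and the computer applies them to the data.

def fahrenheit_to_celsius(f):
    # The rule is spelled out in full by a human.
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(212))  # 100.0
```

No amount of data changes this program's behavior; everything it does was written down in advance, which is exactly what breaks down for fuzzy tasks like recognizing a cat in a photo.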
On the other hand, machine learning, a subset of artificial intelligence (AI), flips this concept around: you give the model data and results, and it produces the instructions. This process is called training.
These instructions are not perfectly accurate, because the process of generating them relies on probability, but they are good enough. Using the produced instructions, a programmer can feed in new data and usually get correct answers.
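A toy sketch of that flipped arrangement, using simple gradient descent to learn a linear rule (the numbers and the method here are illustrative; real models like GPT-3.5 learn billions of such parameters from vast text corpora):

```python
# Machine learning: we supply example inputs (data) and outputs
# (results), and training recovers the rule. Here gradient descent
# learns w and b so that y is approximately w * x + b.

data    = [0.0, 1.0, 2.0, 3.0, 4.0]   # inputs
results = [1.0, 3.0, 5.0, 7.0, 9.0]   # outputs (secretly y = 2x + 1)

w, b = 0.0, 0.0
for _ in range(5000):
    # Average gradient of the squared error over the examples.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(data, results)) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(data, results)) / len(data)
    w -= 0.01 * grad_w
    b -= 0.01 * grad_b

# The learned "instructions" are just the numbers w and b.
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The programmer never wrote "multiply by 2 and add 1"; the training loop found those values from the examples, which is the sense in which the model writes its own instructions.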
ChatGPT is a great advancement in AI, but it suffers from the same problem as all machine learning models: it is frequently, and confidently, wrong. ChatGPT’s tendency to produce incorrect answers has been dubbed “hallucination” because of the confidence with which it delivers them.
In a post on his blog, The Honest Broker, Ted Gioia compares ChatGPT to the protagonist of the television show “Sneaky Pete,” who famously states, “I’m not a con man… I’m a confidence man… I give people confidence, they give me money.”
“The confidence game is a real art—more than just lying,” said Gioia. “A con job requires something grander, a fast talking sureness that always seems to be right even when it is dead wrong. If you’re caught in a lie, you just build a bigger lie to hide it.”
This comparison is a bit of an exaggeration, but Gioia is not completely off.
ChatGPT has no understanding of the truth; it is only aware of what could be and not necessarily what is. It generates its responses based on probability and information, not absolute truth.
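A toy illustration of that probabilistic behavior. The tiny “model” below is invented for demonstration (a real language model learns its probabilities from billions of examples), but it shows how sampling by likelihood rather than truth can produce a confident wrong answer:

```python
import random

# A hypothetical next-word distribution after the prompt
# "The capital of Australia is". The model ranks plausible
# continuations by probability, not by truth.
next_word_probs = {
    "Sydney":    0.55,  # plausible-sounding but wrong
    "Canberra":  0.40,  # correct
    "Melbourne": 0.05,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling picks a word in proportion to its probability, so the
# confident-sounding wrong answer wins more often than not.
choice = random.choices(words, weights=weights, k=1)[0]
print(choice)
```

Nothing in this process consults a fact; the output is simply whatever the learned distribution makes most likely, which is why fluency and accuracy can come apart.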
A lot of effort has gone into finding the limits of ChatGPT. Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School, studied how ChatGPT would perform in a core MBA course called Operations Management; ChatGPT ended up earning a B to B-.
“I was just overwhelmed by the beauty of the wording—concise, choice of words, structure. It was absolutely brilliant… But the math is so horrible,” said Terwiesch. “The language and intuition are right, but even relatively simple middle-school math it got wrong.”
Another study, conducted by David W. Agler, tested whether ChatGPT could pass an introductory symbolic logic course, a foundation for higher mathematics; ChatGPT scored a D, a full letter grade below the class average.
From these and other studies, it is reasonable to conclude that one of ChatGPT’s limitations is its inability to perform logical and mathematical tasks. Prompt it for a list of books on logic and it will give you the best list you have ever seen; prompt it to solve a simple logic puzzle and it will probably give you the wrong answer.
ChatGPT is great at giving general advice and answers, but for anything deeper you are rolling the dice.
“ChatGPT is like an urbane, overconfident version of Wikipedia or Google,” said John Gapper. “Useful as a starting point but not for complete answers.”
In spite of, or maybe because of, its limitations, a lot of investment capital is flowing toward generative AI. In 2022, venture capital investment in generative AI reached an astonishing (at the time) $2.1 billion.
At the start of 2023, Microsoft confirmed a $10 billion investment in OpenAI, which single-handedly dwarfs all of 2022’s venture capital investment in the field. Venture capital is now dropping its dead dream of Web3 and cryptocurrency and warming up to the new age of AI.
Google has also realized that it faces real competition from generative AI. After months of watching ChatGPT’s user base skyrocket without making a move, and insisting that its goal is “to be bold and responsible,” Google announced its own AI chatbot, Bard.
Many investors believe that Google has lost too much ground: its stock price dropped more than 8% on the day of the announcement, wiping out billions in market value.
ChatGPT will not take your job, but it is one step in a long journey to make human work obsolete.
There will surely come a day in the distant future, assuming we are still around, when most tasks could be automated so easily that requiring the majority of people to work would make little sense. It is impossible to predict what that world would look like, but it will surely be drastically different from ours today.