ChatGPT can be called a "blockbuster" example of how AI is revolutionizing our world.
Yet a blockbuster, like everything else on our little blue planet, has an end.
Generative AI of this kind is powered by a class of deep learning models called large language models (LLMs).
An LLM can generate human-like text from any written prompt.
It is a foundation model: pre-trained on broad data and versatile enough to be applied to diverse tasks, such as answering questions, summarizing documents, and writing emails (a minimal prompting sketch follows below).
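To make the "prompt in, text out" idea concrete, here is a minimal sketch of prompting a pre-trained model with the open-source Hugging Face transformers library; the library choice and the gpt2 checkpoint are my assumptions for illustration, not something named in the article.

```python
# A minimal prompting sketch, assuming `pip install transformers torch`.
# The "gpt2" checkpoint is illustrative; any text-generation model works.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short email thanking a colleague for their help:"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

# The pipeline returns a list of dicts holding the generated continuation.
print(result[0]["generated_text"])
```

The same pre-trained model handles answering questions, summarizing, and drafting emails alike; only the prompt changes, which is precisely what makes it a foundation model.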
Generative AI models face hard questions today:
Will these models be put to harmful uses?
What more can we expect from them?
And while confusion reigns over what general intelligence really means, who gets to decide?
The evolution of LLMs has been driven primarily by two significant advancements:
First, the advent of LLMs was underpinned by a scientific development known as the transformer model, introduced by Google researchers in 2017*.
Second, the breakthrough of transformer models is that they can be trained on large-scale datasets (representing approximately 1% of the internet). This became possible thanks to technological advances in hardware (GPUs), even though such hardware remains expensive.
* In 2017, transformer technology was introduced, which greatly improved LLMs' ability to process and comprehend the context of words within lengthy text passages. Previous models had difficulty capturing long-range dependencies.
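The reason the transformer captures long-range context so well is its attention mechanism: every token directly weighs every other token in the passage, instead of passing information step by step as earlier recurrent models did. Below is a minimal NumPy sketch of scaled dot-product self-attention, the transformer's core operation; all names, shapes, and values are illustrative, not from the article.

```python
# A minimal sketch of scaled dot-product self-attention in plain NumPy.
# Real transformers add learned Q/K/V projections, multiple heads,
# positional encodings, and many stacked layers.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    d_k = K.shape[-1]
    # Every token scores against every other token, so a dependency
    # 500 words away is as direct as one between adjacent words.
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)   # each row: attention over all tokens
    return weights @ V                   # context-aware token representations

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                  # 5 tokens, 8-dim embeddings (toy sizes)
X = rng.normal(size=(seq_len, d_model))  # stand-in token embeddings
print(self_attention(X, X, X).shape)     # -> (5, 8)
```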
😲 A trillion here, a trillion there. Soon it's real money, on Exponential View: https://www.exponentialview.co/p/a-trillion-here-a-trillion-there?utm_medium=email
Generative AI exists because of the transformer, on Financial Times: https://ig.ft.com/generative-ai/