📨 Weekly digest: week 21, 2024 | AI: we're moving from 'just' knowing... to knowing AND doing
What does AI understand and know? | AI this week in the news; use cases; for the techies
Hello friends, and welcome to the weekly digest, week 21 of 2024.
We are witnessing the dawn of a significant shift in the field of AI.
Traditionally, AI has focused on building intelligent systems that can analyze data, recognize patterns, and make predictions. This is the "knowing" part.
The exciting development is the move towards AI that can understand the world, interact with it, and perform actions. This is the "doing" part.
This opens the door for AI to be not just an information processor but a true collaborator and problem-solver.
AI that can grasp information and use it to take action.
Large Language Models (LLMs) are like super-powered language experts, but Retrieval-Augmented Generation (RAG) models take it a step further.
RAG models act like LLMs with a built-in research assistant, constantly retrieving the latest data to inform their responses. This "knowing and doing" approach is a game-changer for businesses. From automating tasks to crafting innovative tools, these AI advancements can potentially solve real-world problems and unlock new opportunities, but they also introduce new risks.
Let's delve deeper into the "knowing and doing" of AI with a focus on Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) models.
Large Language Models (LLMs):
These are AI systems trained on massive amounts of text data.
Think of them as digital omnivores, consuming and learning from books, articles, code, and other text forms.
This vast knowledge allows them to perform various tasks, such as writing different kinds of creative content, translating languages, and answering questions in an informative way.
However, LLMs primarily focus on the "knowing" aspect. They excel at understanding and responding to text prompts but may struggle with taking action directly in the real world.
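To make the "knowing" side concrete, here is a minimal sketch of prompting a hosted LLM. The newsletter doesn't name a provider or model, so the OpenAI Python SDK and the "gpt-4o-mini" model name below are assumptions for illustration; any chat-completion style API would look much the same.

```python
# Minimal sketch: asking a hosted LLM a question (the "knowing" side).
# Assumes the OpenAI Python SDK and an API key in the environment;
# the model name is a placeholder chosen for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user", "content": "How do I fix a leaky faucet?"}
    ],
)

# The model answers purely from what it learned during training.
print(response.choices[0].message.content)
```

Note that nothing outside the model is consulted here: the answer is only as current and as accurate as the training data.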
Retrieval-Augmented Generation (RAG):
This is a newer approach that builds upon LLMs.
Imagine a RAG model as an LLM with a built-in librarian.
Along with its own knowledge, it can access and retrieve relevant information from external sources when needed.
This "retrieval" aspect empowers RAG models to go beyond just "knowing" and bridge the gap to "doing."
For example, if you ask a RAG model to write a blog post on "fixing a leaky faucet," it can not only draw on its general knowledge of language and plumbing but also retrieve and process online tutorials or manuals to provide a more comprehensive and actionable guide.
Schema for understanding LLM and RAG:
Here's a schema to illustrate how these models work:
In this schema:
The user provides a prompt or question.
The LLM uses its internal knowledge base to understand the prompt and generate a response.
A RAG model can take this a step further. It can access external resources like manuals or tutorials (represented by the librarian) to supplement its knowledge and potentially provide a response that includes actionable instructions.
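Here is that schema sketched in code: a prompt comes in, a retrieval step (the "librarian") pulls relevant documents, and the augmented prompt is what gets sent to the LLM. The document list and the naive keyword-overlap scoring are made up purely for illustration; real RAG systems typically use embeddings and a vector store, and the final prompt would be passed to an LLM as in the earlier sketch.

```python
# Toy sketch of the schema above: user prompt -> retrieve -> generate.
# The documents and scoring are invented for illustration only.

# A tiny "library" the built-in librarian can search.
DOCUMENTS = [
    "Turn off the water supply valve under the sink before any repair.",
    "A worn rubber washer is the most common cause of a dripping faucet.",
    "Reassemble the faucet handle and turn the water back on slowly.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(question: str) -> str:
    """Combine retrieved context with the user's question."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return (
        "Use the following notes to answer.\n\n"
        f"Notes:\n{context}\n\n"
        f"Question: {question}"
    )

question = "How do I fix a leaky faucet?"
prompt = build_rag_prompt(question)
print(prompt)  # this augmented prompt is what gets sent to the LLM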
Many new questions arise, and it is essential to understand how to use this powerful tool.
LLMs and RAG models represent a significant step towards AI that can not only understand the world but also interact with it in meaningful ways. This "knowing and doing" combination holds immense potential for various applications, from automating tasks to creating new tools and solving complex problems.
We will explore LLMs and RAG models next week. Stay tuned.
What do you think? AI: we're moving from 'just' knowing... to knowing AND doing.
If you haven't already, you can start with our new series: AI dystopia series | The genesis: a flawed utopia.
I am looking forward to reading your thoughts in a comment.
Happy days,
Yael et al.
🦾 AI elsewhere on the interweb
With expanded AI Overviews, more planning and research capabilities, and AI-organized search results, our custom Gemini model can take the legwork out of searching. Generative AI in Search: Let Google do the searching for you
In case this wasn’t obvious: OnlyFans stars outsource their text chat to boiler rooms in low-cost countries. (Meanwhile, the same kind of chat is a big early use case for LLMs.) Your online influencer girlfriend is actually a rotating cast of low-wage workers. I became one of them.
Fast access to our weekly posts
📮 Maildrop
🚨❓ Big question
📨 Weekly digest
You are receiving this email because you signed up for Sustainability Insights by Yael Rozencwajg. Thank you for your interest in our newsletter!
Weekly digests are part of Sustainability Insights: approaches and strategies.
We share tips to help you lead, launch, and grow your sustainable enterprise.
Become a premium member, and get our tools to start building your AI-based enterprise.
Not a premium member yet?
Thank you for being a subscriber and for your ongoing support.
If you haven’t already, consider becoming a paying subscriber and joining our growing community.
To support this work for free, consider “liking” this post by tapping the heart icon, sharing it on social media, and/or forwarding it to a friend.
Every little bit helps!
Keeping artificial intelligence secure and trustworthy is becoming increasingly complex, particularly with the growth of remote work. The enterprise network has effectively become much larger, more dispersed, and more difficult to secure. This is why it is important to articulate informed questions: https://wildintelligence.substack.com/p/is-a-wrong-ai-decision-mistake-bad-choice-error.