📨 Weekly digest: 10 2025 | The algorithmic gaze: Why AI misses the human story
Beyond the data: AI's human problem | AI this week in the news; use cases; tools for the techies
👋🏻 Hello legends, and welcome to the weekly digest, week 10 of 2025.
This International Women's Day, we stand at a pivotal moment, a juncture where the rapid advancement of AI intersects with the enduring power of human understanding.
As we celebrate women's achievements and resilience throughout history, let us also reflect on the evolving landscape of technology and its impact on our lives.
With its promise of unparalleled efficiency and data-driven insights, AI has undeniably reshaped our world.
Yet, within its intricate algorithms and vast computational power lies a fundamental limitation: the absence of the human element.
The essence of what makes us human—our empathy, intuition, and the ability to grasp the 'why' behind actions—remains beyond the grasp of even the most sophisticated AI.
Today, we must confront this gap not with fear but with a renewed commitment to harnessing the synergy between AI's analytical prowess and the irreplaceable depth of human comprehension. For it is in this union, in bridging this divide, that we find the true potential to create a future where technology serves humanity, empowering us to build a more equitable and insightful world.
AI has already transformed our world, promising unprecedented efficiency and insights. But as we increasingly rely on algorithms to make decisions, it's crucial to acknowledge their inherent limitations.
AI excels at processing data, identifying patterns, and optimizing processes. Still, it fundamentally lacks the human element—the context, intention, and why—that drives our actions and shapes our world.
Too often we forget how powerful the combination of AI and human understanding can be, and why that combination is so essential.
Let's dissect some key areas where AI's algorithmic gaze falls short:
1. The rearview mirror of the future:
AI thrives on historical data, making it excellent at predicting trends based on the past. But the future isn't a simple extrapolation of what's already happened.
Black swan events, disruptive innovations, and shifts in human behavior can throw even the most sophisticated AI predictions off course.
Think of the stock market.
AI can analyze past performance but cannot foresee a sudden geopolitical crisis or a groundbreaking technological advancement that will reshape the landscape.
2. Patterns without purpose:
AI can identify correlations and surface trends but struggles to understand the underlying motivations. For example, a recommendation engine might suggest a product based on your past purchases, but it doesn't grasp why you bought those items in the first place.
Did you need them?
Did you want them? Were they a gift?
The "why" is crucial, something AI often misses.
3. Data trails, not human tales:
AI analyzes clicks, views, and transactions but misses the emotional context.
A customer abandoning a shopping cart might be flagged as a negative signal, but the reason could be anything from a crying baby to a sudden change in budget.
The richness of human experience is reduced to data points, losing the nuanced story behind the actions.
4. Compliance vs. commitment:
AI can track whether employees follow procedures, but it can't measure their genuine engagement and dedication. An employee might tick all the boxes for a training program without truly absorbing the material or feeling connected to the company's mission.
True commitment comes from within, something AI can't quantify.
5. Keywords vs. comprehension:
AI can process language but doesn't truly understand meaning like humans do. A chatbot might respond to your query with relevant keywords, but it might miss the subtle nuances of your question or the emotional tone behind it.
Context and intent are often lost in translation; the toy example below shows just how shallow keyword matching can be.
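As a purely hypothetical illustration of the gap between keyword matching and comprehension, here is a toy responder (not any real chatbot) that picks a canned answer from surface keywords and is blind to the frustration behind the message.

```python
# A toy keyword-matching responder, purely illustrative: it picks a canned
# reply from surface keywords and never models intent or tone.

# Hypothetical keyword -> reply table.
CANNED_REPLIES = {
    "refund": "You can request a refund from your account page.",
    "delivery": "Deliveries usually arrive within 3-5 business days.",
    "password": "Use the 'Forgot password' link to reset it.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    lowered = message.lower()
    for keyword, answer in CANNED_REPLIES.items():
        if keyword in lowered:
            return answer
    return "Sorry, I didn't understand that."

# The keyword matches, so the bot answers confidently, but the anger and the
# real request ("fix this, it keeps happening") are never represented.
print(reply("This is the third time my delivery is late and I'm furious."))
```

The bot "answers" the delivery question correctly and still fails the customer, because the intent and the emotional tone were never part of the input it actually used.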
6. The digital shadow vs. the authentic self:
AI can build a profile of you based on your online activity, but it's just a fragmented representation of your complex and multifaceted personality.
Your online persona is a curated version of yourself, not the whole story.
AI risks mistaking the digital shadow for the real person.
7. Metrics vs. meaning:
AI can track message response times, calendar events, and project milestones, but it can't grasp the meaning behind these metrics.
A quick response doesn't necessarily equate to genuine connection, and a packed calendar doesn't always reflect a fulfilling life.
AI can measure activity, but it struggles to assess its true value.
8. Implementation vs. inspiration:
AI can analyze the final product but can't understand the creative process or the countless ideas explored and discarded along the way.
The journey of innovation is often messy and unpredictable, driven by human intuition and inspiration – qualities that AI struggles to replicate.
The human advantage:
As humans, we have the ability to recognize these limitations.
We should remember that the true power of AI lies in its ability to augment, not replace, human intelligence.
Combining AI's analytical capabilities with human understanding, empathy, and creativity can unlock a new level of insight and innovation.
It's the key to building a future where technology serves humanity, not the other way around.
What do you think?
I look forward to reading your thoughts in the comments.
Yael.
This week’s Wild Pod episode
Yael on AI:
Sharing personal views and opinions on global advancements in AI from a decision leader perspective.
🦾 AI elsewhere on the interweb
Machine learning can now predict correlations between personality questions better than academic psychologists, in Nature. [LINK]
"The results of this study highlight several potential practical implications for both psychometric research and applied fields such as human resources, healthcare, and marketing. Specialised AI models like PersonalityMap might be able to greatly expedite the research process by reliably predicting personality trait correlations, facilitating faster hypothesis testing and scale development at a lower cost. AI might also be able to also assist in automating personality assessments, reducing the need for expert input in contexts such as hiring and diagnostics."Introducing an advanced Articulate Medical Intelligence Explorer (AMIE), which goes beyond diagnosis towards treating and managing disease over time by Google Research. [LINK]
In a randomized study, AMIE matched or exceeded clinician performance over multi-visit consultations with professional patient actors, including precisely planning investigations, treatments and prescriptions, and appropriately using trusted clinical guidelines.
After trailing it two weeks ago, OpenAI launched ‘GPT-4.5’, its last model not to use the new ‘chain of thought’ (AKA reasoning) approach seen in o1 and o3, which most people in the field think is probably the path forward. This model seems to be somewhat better than the preceding 4o, though not dramatically so, but it doesn’t match o1 and o3, and it is 15-30x more expensive to run than 4o, which has puzzled a lot of people. [LINK]
Meanwhile, Anthropic released its own new model, Claude 3.7 Sonnet, with a hybrid approach that can switch back and forth between one-shot answers and ‘chain-of-thought’. [LINK]
Fast access to our weekly posts
📌 AI case study: "Are you ready for Universal AI Connectivity?"
🎲 AI hallucinations and ungrounded responses
🎯 How to build with AI agents | The future of work: Preparing your workforce for the age of AI
📮 Maildrop 05.03.25: LLMs, 2025 trends: Efficiency and scalability, part 1/3
🚀 AI proofing: Ensuring AI systems do what they're supposed to [Week 4]
🚨❓Poll: What is the true threat to national security in the context of AI development?
Previous digest
📨 Weekly digest
Thank you for being a subscriber and for your ongoing support.
If you haven’t already, consider becoming a paying subscriber and joining our growing community.
To support this work for free, consider “liking” this post by tapping the heart icon, sharing it on social media, and/or forwarding it to a friend.
Every little bit helps!