🎲 Navigating the labyrinth: AI ethics in a profit-driven world
AI data and trends for business leaders | AI systems series
How do we navigate the ethical minefield of AI development and implementation?
Should a dedicated team solely be responsible, or should ethical considerations permeate the company culture?
This debate delves into the merits and drawbacks of both approaches, ultimately advocating for a strategic blend.
AI's real value lies in its ability to analyze data, identify patterns, and make predictions. But how do those capabilities interact with ethics?
To address privacy concerns, a company might prioritize responsible data collection over faster data gathering.
An AI development team might choose a slightly less efficient but more transparent algorithm to ensure fairness in decision-making.
Companies prioritizing ethics and expediency are well-positioned for long-term success in a world where consumers and stakeholders increasingly value responsible business practices.
The relentless march of AI presents a thrilling paradox for large companies. It promises revolutionary advancements across industries but also ushers in a complex ethical landscape1.
Can companies be ethical while expedient?
Let’s try to understand the question better through the evidence.
📌 Insight 1: The myth of the objective algorithm
Algorithmic bias isn't just about bad data. Even well-designed algorithms can perpetuate societal inequalities.
A 2016 study by ProPublica2, whose findings still hold, showed that a widely used risk assessment tool in the US criminal justice system disproportionately flagged Black defendants as higher risk, despite the developers' claims of fairness.
▸ Why it's contrarian: It challenges the common belief that a dedicated AI ethics team can simply ensure unbiased AI through data scrubbing.
It highlights the need for a deeper understanding of how algorithms can inherit societal biases.
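To make this insight concrete, here is a minimal audit sketch in Python, using entirely hypothetical data, of the kind of disparity ProPublica measured: comparing false-positive rates (people flagged high risk who did not reoffend) across two groups. The helper name and the toy lists are illustrative assumptions, not the actual COMPAS analysis:

```python
# Minimal bias-audit sketch with hypothetical data: compare false-positive
# rates across two groups. A prediction of 1 means "flagged high risk";
# an outcome of 1 means the person actually reoffended.

def false_positive_rate(predictions, outcomes):
    """FPR = flagged negatives / all negatives (people who did not reoffend)."""
    flagged_negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flagged_negatives:
        return 0.0
    return sum(flagged_negatives) / len(flagged_negatives)

# Hypothetical predictions and outcomes for two demographic groups.
group_a_pred, group_a_out = [1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 0, 0]
group_b_pred, group_b_out = [0, 1, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0]

fpr_a = false_positive_rate(group_a_pred, group_a_out)
fpr_b = false_positive_rate(group_b_pred, group_b_out)
print(f"Group A FPR: {fpr_a:.2f}, Group B FPR: {fpr_b:.2f}")
```

In this toy data, group A's false-positive rate (0.50) is more than double group B's (0.20) even though nothing in the code mentions group membership: the disparity emerges from the data itself, which is precisely why data scrubbing alone cannot guarantee fairness.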
📌 Insight 2: The quantified black box
Obsessing over transparency in complex AI systems might be a dead end. Focusing on fairness and explainability in outcomes might be more productive.
A 2022 McKinsey report3 found that only 20% of companies surveyed felt they had a very high level of understanding of the inner workings of their AI models.
▸ Why it's contrarian: This challenges the idea that a company culture focused on understanding every step of an AI system is necessary.
It suggests that focusing on demonstrably fair outcomes might be a more achievable and impactful approach.
📌 Insight 3: The ethical arms race
Overly stringent internal AI ethics might stifle innovation and create a competitive disadvantage. Companies might be incentivized to relocate development to regions with looser regulations.
A 2023 study by the Center for Security and Emerging Technology [CSET] at Georgetown University4 found that 70% of AI experts surveyed believe the global competition for AI dominance could lead to an "ethical race to the bottom."
▸ Why it's contrarian: This challenges the focus on purely internal solutions.
It highlights the need for international collaboration on AI ethics to prevent a race to the bottom and ensure a level playing field for responsible AI development.
📌 What’s next and considerations
Some argue that the very notion of a dedicated AI ethics team within a profit-driven corporation is a band-aid solution. They see it as a way for companies to appear ethical while prioritizing profits and innovation. True ethical AI development, they claim, requires a radical shift in corporate priorities.
This radical approach suggests dismantling the dedicated AI ethics team altogether. Instead, resources and decision-making power should be placed in the hands of a diverse, independent ethics council composed of external experts, community representatives, and even users potentially impacted by the AI.
This council would have veto power over AI projects deemed unethical, fundamentally changing the power dynamics within the company.
However, such a radical approach has its own set of risks. Companies might become hesitant to invest in AI development for fear of the council's veto.
A survey commissioned by the Markkula Center for Applied Ethics and ITEC5, conducted online via Pollfish on November 7, 2023, found that two-thirds of respondents are concerned about the impact of AI on the human race.
Innovation could be stifled, and valuable advancements could be lost to competitors with less stringent ethical oversight.
Additionally, navigating the complex interests of such a diverse council could lead to paralysis and hinder timely decision-making.
This controversial conclusion throws a wrench into the debate:
Is a dedicated team or company culture enough, or do we need a more fundamental restructuring of corporate power in the age of AI?
The answer, as with many things in AI ethics, remains a work in progress.
Continue exploring
🎲 Data and trends
This email is sent to you because you signed up for Wild Intelligence by Yael Rozencwajg. Thank you for your interest in our newsletter!
Data and trends are part of Wild Intelligence, as well as its approaches and strategies.
We share tips to help you lead, launch, and grow your sustainable enterprise.
Become a premium member, and get our tools to start building your AI-based enterprise.
Following reports that Microsoft fired its DEI team, becoming the latest company to ditch "woke" policies, new questions arise.
https://nypost.com/2024/07/17/business/microsoft-fires-dei-team-becoming-latest-company-to-ditch-woke-policy-report/