AI and data privacy: How to balance innovation and privacy?

Published December 4, 2023 by Thierry Maout - 10 min read

Summary

As artificial intelligence (AI) continues to pick up speed in the technology space and seems to infiltrate the product roadmap of every company out there, concerns around data privacy and how to regulate innovation loom.

 

In this article, we explore the current context surrounding AI and privacy, the importance of maintaining control over data, how regulators are trying to deal with AI today, and how organizations can use it safely and responsibly, mitigating risks while promoting innovation internally.

 

Watch now: Last August, our Chief Privacy Officer Thomas Adhumeau and our Vice President of Product Development Jeff Wheeler were joined by Volha Litvinets, Risk Consultant at Ernst & Young and Doctor in AI Ethics & Responsible Tech, for a conversation around data privacy and artificial intelligence (AI).


Click on the recording to watch their discussion and learn more about the state of privacy in AI, the regulations in the works, and some of the risks involved for organizations:
[Webinar recording: "AI and data privacy: Balancing innovation and privacy" - watch the video]

 


 

 

The artificial intelligence landscape today

 

To start our webinar on AI and data privacy, Volha Litvinets, Risk Consultant at Ernst & Young and Doctor in AI Ethics & Responsible Tech, highlights that defining artificial intelligence (AI) is a difficult task in itself.

 

To bring some clarity and scope to the conversation, she offers the definition laid out by the Organisation for Economic Co-operation and Development (OECD), which has been used for the European Union’s AI Act (more on that later):

 

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

- OECD, AI Principles overview (Source: OECD.AI)

 

One element that makes artificial intelligence difficult to define is the sheer number of technologies that could fall under the term.

 

Indeed, AI isn't a new trend: it has been around for decades, from the 1950s Dartmouth workshop all the way to machine learning and, more recently, generative AI technologies.

 


 

Despite this long history, artificial intelligence is seeing a recent rise in popularity since the technology is more accessible than ever and increasingly embedded into our everyday lives:

 

  • Half of U.S. mobile users use voice search every day (e.g., Alexa, Siri)
  • ChatGPT had 1 million users within the first five days of being available 
  • Over 75% of consumers are concerned about misinformation from AI 
  • The AI market size is expected to reach $407 billion by 2027

 

As interest in AI continues to grow and practical applications become more widespread, concerns over the data privacy implications of these technologies have started to arise. 

 

Artificial intelligence and data privacy

 

Artificial intelligence technology has been raising red flags for several reasons, from unprecedented data collection for training purposes to suspicions of surveillance and monitoring, all the way to the spread of bias and discrimination and an overall lack of transparency.

 

Asked about these issues, Volha Litvinets highlights the fact that, while valid and essential, these concerns are not inherently related to AI but reflect the shortcomings of technology and society as a whole:

 

“When we delegate this responsibility to AI, we must remember to take our own responsibility as a society for the tools that we are creating. Biases, for example, come from us, the people who are developing the algorithms and writing the code. This is basically a mirror of society (...) The question is what kind of inequalities we reproduce and what kind of new inequalities we are creating here.”

- Volha Litvinets, Risk Consultant at Ernst & Young and Doctor in AI Ethics & Responsible Tech (source: Didomi webinar)

 

Experts across the board insist on keeping humans in the loop to mitigate incidents, such as when New York lawyers were sanctioned for using ChatGPT to draft a legal brief, having mistakenly included fictitious case references the generative AI tool had simply invented.

 

AI-related headlines have been a fixture in the news, with recurrent pushback on generative AI tools from regulators: ChatGPT was temporarily banned in Italy after the Garante (the Italian DPA) accused OpenAI of collecting personal information without a legal basis, and Google Bard's European launch was postponed for months after Google failed to address privacy concerns raised by the Irish Data Protection Commission.

 

However, Clearview AI might be the most potent illustration of what happens when AI goes wrong and regulators must step in.

 


 

The facial recognition company has reportedly scraped the internet to collect over 20 billion facial images and provided them to U.S. law enforcement agencies, which have conducted over a million searches using the software. 

 

Despite being fined a combined 60+ million euros by data protection authorities in the United Kingdom, France, Greece, and Italy, the company refuses to acknowledge its responsibility and continues to operate at the time of writing this article.

 

“This case is particularly incredible because not only was there a compliance issue with GDPR, but Clearview.ai didn't even bother replying to the CNIL. It is a bad precedent, and it shows why it's so important to regulate AI quickly.”

- Thomas Adhumeau, Chief Privacy Officer at Didomi (source: Didomi webinar)

 

To learn more about Clearview, check out this fascinating opinion piece by Sarah Barker, a former UK lawyer producing specialist content for law firms, professional bodies, and digital agencies:

 

{{the-case-for-international-cooperation-on-ai}}

 

What are the current efforts to regulate AI?

Efforts are emerging around the world to regulate AI and bring some clarity to the field. Among the most critical initiatives, three are worth mentioning and keeping an eye on:

 

  • The AI Act: Europe has taken the lead with this trailblazing regulation on artificial intelligence. Designed to be innovation-friendly, the Act provides a light governance structure and a risk-based approach, with fines of up to 30 million euros or 6% of an organization's global annual turnover. The Council and Parliament agreed on a deal regarding the Act on December 9, 2023, and will now need to formally adopt it for it to become EU law.
  • The Blueprint for an AI Bill of Rights in the United States: Not enforceable for now and lacking federal status, it is a principles-based guidance document for the design and use of AI.
  • Chinese AI governance: China takes three different approaches to AI - rules for online algorithms with a focus on public opinion, standards for trustworthy AI systems, and AI ethics principles enforced through a tech ethics review board.

 

Overall, the number of AI-related bills passed into law globally has risen dramatically and should continue to grow.

 


 

When asked about the outlook for AI regulation in the United States, our Vice President of Product Development and U.S. privacy expert, Jeff Wheeler, expresses skepticism towards the prospect and draws a parallel with the current patchwork of state data privacy laws in place: 

 

“Unfortunately, the way things look right now, especially with geopolitical issues and the upcoming election next year, I don't foresee a lot of regulations coming around from the U.S. anytime soon. I think what we will see is similar to what we saw in privacy where we're going to have an individual state like California come in and start the domino effect.”

- Jeff Wheeler, Vice President of Product Development at Didomi (source: Didomi webinar)

 

 

While regulators worldwide work on streamlining their efforts regarding Artificial Intelligence, what are some things organizations can implement today to ensure an ethical and responsible use of AI?

 

Ethical and responsible use of AI: what can organizations do today?

 

In a keynote at the EBG La Rochelle event last October, Didomi CEO and co-founder Romain Gauthier shared some of his insights on the privacy challenges of a legal, ethical, and responsible approach to AI, listing 7 requirements:

 

  1. Human involvement: Fundamental rights, human action, and control.
  2. Technical robustness and safety: Attack resilience and security, contingency plans and general safety, accuracy, reliability, and reproducibility.
  3. Privacy and data governance: Privacy, data quality and integrity, and data access.
  4. Transparency: Traceability, explicability, and communication.
  5. Diversity, non-discrimination, and equity: Absence of unfair bias, accessibility and universal design, and stakeholder participation.
  6. Societal and environmental well-being: Sustainability and respect for the environment, social impact, society and democracy.
  7. Responsibility: Auditability, minimizing and communicating negative impacts, arbitration, and redress.

 

The list, inspired by the ethics guidelines for trustworthy AI from the European Commission, echoes what Thomas Adhumeau, Jeff Wheeler, and Volha Litvinets discussed during our webinar, with our Chief Privacy Officer highlighting that an ethical approach to AI is, in many ways, very similar to what you could consider traditional data privacy best practices.

 

“From a compliance standpoint, it's not so different from what you would do with any other tool you use. The only difference, perhaps, is that it will be more difficult to track all the data flows and, on top of that, for what purposes the tool is being used. I think it makes it a tiny bit more challenging, but ultimately, the principles should remain the same, at least from a privacy standpoint.”

- Thomas Adhumeau, Chief Privacy Officer at Didomi (source: Didomi webinar)
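To make this concrete, a privacy team can track AI tools the same way it tracks any other data processor, for example by keeping a simple register of data flows and purposes per tool. Below is a minimal sketch of what such a register could look like, written in TypeScript; the structure and field names are our own illustration, loosely inspired by GDPR Article 30 records of processing activities, and not a Didomi product feature:

```typescript
// Minimal sketch of a register entry for an AI tool, loosely modeled on
// GDPR Article 30 records of processing activities. All field names are
// illustrative assumptions, not part of any Didomi API.
interface AiToolRecord {
  tool: string;                  // e.g. "internal-chat-assistant"
  vendor: string;
  purposes: string[];            // what the tool is used for
  dataCategories: string[];      // personal data that flows into the tool
  usedForModelTraining: boolean; // is the data retained to train models?
  retentionDays: number;
  dpaSigned: boolean;            // data processing agreement in place?
}

const register: AiToolRecord[] = [
  {
    tool: "internal-chat-assistant",
    vendor: "ExampleAI Inc.", // hypothetical vendor
    purposes: ["drafting customer support replies"],
    dataCategories: ["customer_emails"],
    usedForModelTraining: false,
    retentionDays: 30,
    dpaSigned: true,
  },
];

// Flag entries that need a privacy review before the tool is approved,
// e.g. tools that retain data for training or lack a signed DPA.
const needsReview = register.filter(
  (r) => r.usedForModelTraining || !r.dpaSigned,
);
```

However simple, a register like this answers the two questions raised in the quote above: which data flows into each tool, and for what purposes the tool is being used.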

 

Artificial intelligence and data privacy checklist

To mitigate risks and promote ethical and responsible AI practices in your organization, the team at Didomi has put together a short checklist you can use, share, and download:

 

[Image: Artificial intelligence and data privacy checklist]

 

While the checklist (available in PDF format here) establishes essential, high-level steps every organization should consider when implementing AI tools internally, how can Didomi, in practical terms, help with your AI challenges?

 

How Didomi can help with balancing AI and data privacy

 

In his keynote, our CEO Romain Gauthier presented 3 ways Didomi can help organizations build ethical AI practices:

 

  1. Clearly explaining the purposes of processing data
  2. Defining the data categories (preferences) that will feed AI models
  3. Providing user-friendly (and privacy-focused) workflows and user interfaces

 

While the first two points can be addressed by using a Consent Management Platform (CMP) and implementing customized customer preference journeys, the third one will be critical in how users perceive your efforts. Collecting and respecting their preferences gives customers control over their AI experience and promotes trust-based relationships.
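As an illustration of the first two points, here is a minimal, hypothetical sketch in TypeScript of what declaring AI-related processing purposes and the data categories that feed a model could look like. The types, field names, and purpose IDs are invented for this example; they are not Didomi's actual SDK or configuration format:

```typescript
// Hypothetical declaration of AI-related processing purposes.
// These types and names are illustrative only, not Didomi's actual API.
interface ProcessingPurpose {
  id: string;
  description: string;      // plain-language explanation shown to users
  dataCategories: string[]; // preference/data categories that feed the AI model
  legalBasis: "consent" | "legitimate_interest";
}

const aiPurposes: ProcessingPurpose[] = [
  {
    id: "ai_personalization",
    description: "Personalize product recommendations using an AI model",
    dataCategories: ["browsing_history", "declared_preferences"],
    legalBasis: "consent",
  },
  {
    id: "ai_support_assistant",
    description: "Run an AI assistant over anonymized support conversations",
    dataCategories: ["support_conversations"],
    legalBasis: "legitimate_interest",
  },
];

// Only feed an AI model the categories a user has actually agreed to:
// consent-based purposes require an explicit opt-in, while this simplified
// sketch lets legitimate-interest purposes through without one.
function allowedCategories(
  purposes: ProcessingPurpose[],
  consentedPurposeIds: Set<string>,
): string[] {
  return purposes
    .filter((p) => p.legalBasis !== "consent" || consentedPurposeIds.has(p.id))
    .flatMap((p) => p.dataCategories);
}
```

The design point is simply that purposes and data categories are declared explicitly and checked at the point where data flows into a model, rather than being left implicit in application code.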

 

This is a foundational building block of our belief in Privacy UX and one of the key reasons behind our Global Privacy UX Solutions, which help organizations create compliant user experiences that respect people's choices and strengthen the bonds between brands and their customers.

 


 

To learn more about artificial intelligence and data privacy, follow us on LinkedIn and book a time with one of our experts to discuss your compliance challenges:

 

{{talk-to-an-expert}}