Scientists created a taxonomy of AI privacy risks

The researchers identified 12 high-level privacy risks that AI newly creates or exacerbates.

Privacy is a fundamental principle for developing ethical AI systems. However, because AI is evolving faster than the rules that govern it, the responsibility for managing privacy threats in AI-powered products and services falls primarily on developers.

This is a difficult challenge for AI practitioners. To handle privacy problems effectively, they must first identify those problems precisely during the research and development of new technologies.

By examining 321 documented AI privacy incidents, researchers at Carnegie Mellon University’s CyLab have created a taxonomy of privacy risks associated with AI. Their objective was to categorize how the unique requirements and capabilities of AI technologies, as reflected in these incidents, created new privacy risks, exacerbated existing ones, or left already-known risks materially unchanged.
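
To make that coding process concrete, here is a minimal, hypothetical sketch of how an incident might be labeled against a baseline taxonomy. This is illustrative only; the class and field names are assumptions, not the authors’ actual coding instrument.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class AIEffect(Enum):
        # How an incident relates AI to a known privacy risk.
        CREATED = "new risk absent from the baseline taxonomy"
        EXACERBATED = "known risk made worse by AI"
        UNCHANGED = "known risk not meaningfully altered by AI"

    @dataclass
    class Incident:
        # One documented AI privacy incident, coded against the baseline.
        description: str
        baseline_category: Optional[str]  # closest Solove (2006) category, if any
        effect: AIEffect

    # Hypothetical example: coding a deepfake incident.
    example = Incident(
        description="Photorealistic deepfake depicting a private individual",
        baseline_category="Distortion",
        effect=AIEffect.CREATED,  # per the article: a new class of distortion risk
    )
    print(example.effect.value)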

As a baseline, the researchers used Daniel J. Solove’s 2006 article “A Taxonomy of Privacy,” which describes classical privacy problems that predate contemporary AI breakthroughs. They then assessed whether, and to what extent, the documented AI privacy incidents corresponded with Solove’s taxonomy.

Sauvik Das, assistant professor at Carnegie Mellon University’s Human-Computer Interaction Institute (HCII), said, “If the incidents where we’re seeing the AI causing harm is challenging that taxonomy, then that’s an instance where AI has changed privacy harm in some way. But if the incident fits neatly into the taxonomy, then that’s an instance where maybe it’s just exacerbated the existing harm, or maybe it hasn’t meaningfully changed that privacy harm at all.”

Through their analysis of the documented incidents against Solove’s taxonomy, the team identified 12 high-level privacy risks that AI technologies newly create or exacerbate. These risks include the following (a short code sketch after the list summarizes them):

Newly created risks:

  • Identification
  • Distortion
  • Physiognomy
  • Unwanted disclosure

Exacerbated risks:

  • Surveillance
  • Exclusion
  • Secondary use
  • Data breaches due to insecurity
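
As a rough illustration, the risks named above can be encoded as a simple mapping from risk to status. This is a sketch covering only the eight risks listed in this article, not the paper’s full set of 12.

    from enum import Enum

    class RiskStatus(Enum):
        NEW = "newly created by AI"
        EXACERBATED = "existing risk worsened by AI"

    # Partial encoding: the eight risks named in this article (of 12 total).
    AI_PRIVACY_RISKS = {
        "Identification": RiskStatus.NEW,
        "Distortion": RiskStatus.NEW,
        "Physiognomy": RiskStatus.NEW,
        "Unwanted disclosure": RiskStatus.NEW,
        "Surveillance": RiskStatus.EXACERBATED,
        "Exclusion": RiskStatus.EXACERBATED,
        "Secondary use": RiskStatus.EXACERBATED,
        "Insecurity (data breaches)": RiskStatus.EXACERBATED,
    }

    # Group and print the risks by status.
    for status in RiskStatus:
        names = [name for name, s in AI_PRIVACY_RISKS.items() if s is status]
        print(f"{status.value}: {', '.join(names)}")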

“We set up a divide, as it relates to products and services, in two ways that pipe into the taxonomy: the requirements of AI and the capabilities of AI,” said Das.

“The requirements of AI refers to the ways in which AI’s data and infrastructural demands exacerbate privacy risks already captured in Solove’s taxonomy.

“AI’s capabilities refer to its ability to do things like infer information about users to predict where they’re going to go next or what they’re going to do next.”

The researchers cited two examples of privacy risks newly created by AI technologies: the spread of deepfake pornography, and physiognomy, the discredited practice of judging a person’s character from their visual appearance.

Das notes that Solove’s taxonomy includes a category called “distortion,” which covers situations in which false or misleading information about a person may be used against them. Deepfakes typically fall into this category, but AI’s capacity to produce photorealistic content depicting an individual in arbitrary contexts is novel. AI has significantly altered this element of the taxonomy, creating a class of distortion risks that did not previously exist.

In May, Das and his team will present their findings at the 2024 ACM CHI Conference on Human Factors in Computing Systems in Honolulu. They aim to develop the research further into a practical tool that helps practitioners and regulators mitigate privacy risks when developing and managing AI technologies.

Journal Reference:

  1. Hao-Ping (Hank) Lee, Yu-Ju Yang, et al. Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI ’24).
