Every person engaged with the networked world constantly creates rivers of data. We do this in ways we are aware of, and ways that we aren’t. Corporations are eager to take advantage. Wired describes one such company, the startup NumberEight, this way:

“It helps apps infer user activity based on data from a smartphone’s sensors: whether they’re running or seated, near a park or museum, driving or riding a train. New services based on such technology will combine what they know about a user’s activity on their own apps with information on what they’re doing physically at the time. With this information, instead of building a profile to target, say, women over 35, a service could target ads to ‘early risers.’”
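
To make the mechanism concrete, here is a deliberately simplified Python sketch of how raw motion-sensor readings might be turned into a behavioral segment like “early risers.” It is an illustration only: the sample readings, the thresholds, and the function names (infer_activity, infer_segment) are invented for this example and do not describe NumberEight’s actual technology.

```python
from datetime import datetime

# Hypothetical sensor log: (timestamp, accelerometer magnitude in g).
# A real SDK would stream readings like these from the phone's motion sensors.
readings = [
    (datetime(2021, 3, 1, 6, 12), 1.9),   # vigorous movement, early morning
    (datetime(2021, 3, 1, 6, 13), 2.1),
    (datetime(2021, 3, 2, 6, 5), 1.8),
    (datetime(2021, 3, 2, 14, 30), 1.0),  # near-rest, afternoon
    (datetime(2021, 3, 3, 6, 10), 2.0),
]

def infer_activity(magnitude_g):
    """Crude threshold: strong acceleration suggests running, weak suggests sitting."""
    return "running" if magnitude_g > 1.5 else "seated"

def infer_segment(readings):
    """Label a user an 'early riser' if most of their active moments fall before 8 a.m.
    This is the kind of behavioral segment an ad service could target instead of
    a demographic one such as 'women over 35'."""
    active = [ts for ts, g in readings if infer_activity(g) == "running"]
    if not active:
        return "unsegmented"
    early = [ts for ts in active if ts.hour < 8]
    return "early riser" if len(early) / len(active) > 0.5 else "unsegmented"

print(infer_segment(readings))  # -> "early riser" for this toy log
```

The point is not these particular heuristics but the pipeline they stand in for: physical behavior is sensed, summarized, and folded into an advertising profile without the user ever filling in a form.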

Such ambitions are widespread. As a recent Harvard Business Review article puts it:

“Most CEOs recognize that artificial intelligence has the potential to completely change how organizations work. They can envision a future in which, for example, retailers deliver individualized products before customers even request them—perhaps on the very same day those products are made.” As corporations use AI in more and more distinct domains, the article foretells, “their AI capabilities will rapidly compound, and they’ll find that the future they imagined is actually closer than it once appeared.”

If an algorithm discriminates against people by sorting them into groups that do not fall into legally protected classes, antidiscrimination laws don’t apply in the United States. (Profiling techniques like those Facebook uses to help machine-learning models sort users are probably illegal under European Union data protection laws, but this has not yet been litigated.) Many people will not even know that they were profiled or discriminated against, which makes it tough to bring legal action. They no longer feel the unfairness, the injustice, firsthand—and that has historically been a precondition to launching a claim.

Class-action lawsuits can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like the lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it,

“According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.”

Importantly, France’s 2016 Digital Republic Act enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who was allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, the French government office in charge of overseeing the law, says that its focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated.
