
The destruction of the algorithm: the new sanction for breaching the GDPR?


Since 2019, the US Federal Trade Commission (FTC) has been seeking ways to punish a new kind of digital unfair practice: illegally obtaining personal information from Internet users and exploiting it with artificial intelligence tools. In these cases, the punishment consists of what is called "the destruction of the algorithm".

Since the resurgence of artificial intelligence, one of the main debates has concerned the ethics of the algorithm. Many advocate a future based on human principles and values, which, by extension, they propose to apply to the behavior of robots. But few propose effective sanctions for companies that unlawfully use the personal data with which they train their algorithms.

An example of this is the US Federal Trade Commission (FTC), which since 2019 has been seeking ways to punish this new type of digital unfair practice, which consists of illegally obtaining personal information from Internet users in order to exploit it with artificial intelligence tools.

One of these measures consists of prohibiting the use of the result of the illicit activity, which has been called "the destruction of the algorithm". Through this measure, the regulator orders the cessation of the use of the algorithm, as the technological asset resulting from the unlawful activity, because that is where the economic value lies.

This sanction substitutes or complements the traditional sanction requiring the deletion of the data obtained, since deletion alone is not effective: even if the illegally obtained information is deleted, the algorithm has already learned from it, so the desired objective has been achieved. This is known as the "algorithmic shadow". If, on the other hand, the sanction involves the elimination of the results, it can, a priori, have an effective deterrent impact on companies that do not apply a sufficient degree of diligence in their use of this technology.

As an example, on March 4, 2022, the FTC reached a settlement in the sanctioning procedure initiated against the well-known company Weight Watchers over the operation of its Kurbo application, an app that offered advice on healthy eating. Due to insufficient control measures, the application allowed minors to register (information from children as young as 8 was detected), which led them to provide personal information without their parents' consent and, therefore, unlawfully. The American regulator's resolution forced the company to delete the illegally obtained data and, in addition to imposing a fine of 1.5 million dollars, ordered it to destroy the algorithms or other artificial intelligence models applied to this activity or obtained thanks to it.

The FTC had already innovated in 2019 when it intervened in the Cambridge Analytica scandal, after it was shown that the platform did not sufficiently protect the privacy of its users. In that case, the agency forced the company to delete all the information it had illegally collected from Facebook users, including the algorithms used for that purpose and the results obtained through that practice.

Shortly thereafter, the FTC confronted another case, this time involving the company Everalbum, the owner of a photo-sharing application that was accused of using facial recognition without users' authorization and without offering them the possibility to object.

In this case, the Commission forced Everalbum to delete all photographs, videos and biometric data obtained through its application, and to remove "any models or algorithms developed, in whole or in part" using such data.

By way of "algorithmic justice," the premise of the FTC's approach seems clear: do not allow companies that violate data protection rules to profit from the illicit use of the personal information they obtain, whether directly through its exploitation or through its use to create or train their algorithms. In short, the objective is to send a message to companies considering breaking the law: think twice, because it does not pay.

This situation raises an interesting legal debate: in the data economy in which we live, algorithms are fundamental processing tools, protected, among other regimes, by the rules on trade secrets and intellectual property. It is no less true, however, that the use of algorithms must not infringe data protection regulations, nor may it violate privacy laws.

 

Article provided by INPLP member: Francisco Perez Bes and Esmeralda Saracíbar (ECIX Group, Spain)

 

 
