
Clearview Data Breach

THIS ACTION IS NOW CLOSED

In May 2022, the Information Commissioner’s Office (ICO) fined Clearview AI Inc. over £7.5 million for illegally scraping images of people from the internet. But despite the ICO’s fine, those affected by the data breach will not receive a penny.

What happened in the Clearview data protection violation?

Clearview AI is an American facial recognition company that gathers facial images and personal data from publicly available information on the internet and social media. The company claims to have the largest known database of facial images, with some figures estimating that it has collected more than 20 billion images.

Clearview scraped the internet for images, including those of UK residents, and added these to its global facial recognition database without the subjects’ knowledge or permission.

There are serious privacy concerns over facial recognition software in the UK, where the processing of biometric data (such as images of a person’s face) is permitted only in very specific circumstances, and consent must also be provided. As such, we believe that Clearview breached UK data protection laws.

If your image has been publicly available online at any point since 2009, including on social media, Clearview very likely breached your data protection rights.

The ICO and the Australian Information Commissioner undertook a joint investigation into the facial recognition company

In July 2020, the UK’s Information Commissioner’s Office (ICO) and the Office of the Australian Information Commissioner (OAIC) began a joint investigation into the personal information handling practices of Clearview AI Inc.

The joint investigation was launched after concerns were raised about Clearview’s data collection methods. The company was accused of scraping the images of people from the internet – including social media – and adding these to its database without seeking permission to do so.

Clearview sells access to this data – including to law enforcement agencies – using the facial recognition algorithm to seek matches and track possible suspects.

However, using data in this way is highly controversial, and, in an investigation that lasted over 15 months, the ICO and the OAIC worked together to look at how the technology was being used and whether any formal regulatory action was needed.

The OAIC released its ruling in November 2021, stating that the firm had breached Australians’ privacy.

The main issue was that the data processing was not transparent. As a result, the regulator ordered the company to stop collecting the photos and delete all the pictures of Australian citizens. 

In May 2022, the Information Commissioner’s Office fined Clearview AI Inc. over £7.5 million for illegally scraping images of people from the internet.

The ICO found that Clearview AI breached UK data protection laws by failing to:

  • Use information of UK residents in a manner that is fair and transparent;
  • Have a lawful reason for collecting people’s facial images and data;
  • Have a process in place to prevent data from being retained indefinitely; and
  • Meet the higher standards of data protection that are expected when dealing with biometric data.

As well as the fine, the ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is available in the public domain, and to delete the data of UK residents from its systems.

But those affected by the data breach did not receive a penny.

The problem with facial recognition software

Clearview allows its customers – including the police and businesses – to upload a photograph of someone and try to identify them. It does this by matching the image to those already held in its database.  

But there are several ethical and privacy concerns over facial recognition software – especially when it comes to profiling and automated decision-making about individuals. In particular, women and people from BAME groups are being discriminated against by companies that use such tech. This is because leading facial-recognition software packages are much worse at identifying women and people of colour than at identifying white male faces. Big Brother Watch – a British civil liberties and privacy campaigning organisation – described how one black schoolboy was “swooped by four officers, put up against a wall, fingerprinted, phone taken, before police realised the face recognition had got the wrong guy”.

The UK Information Commissioner said that Clearview “not only enables identification” of the people in its database “but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable.”

Clearview data breach timeline

  • 2009
    Clearview begins scraping the internet for facial images and personal data.
  • 2020
    The Information Commissioner’s Office (ICO) and the Office of the Australian Information Commissioner (OAIC) begin a joint investigation into the personal information handling practices of Clearview AI Inc.
  • November 2021
    The OAIC releases its ruling, stating that Clearview had breached Australians’ privacy. The regulator orders the company to stop collecting the photos and delete all the pictures of Australian citizens. This effectively closes down Clearview’s operations in Australia.
  • May 2022
    The ICO fines Clearview more than £7.5 million for data protection breaches. The UK’s data protection regulator also orders the facial recognition database company to delete all its UK data.