In August, the Swedish data protection authority, the Data Inspectorate (the Swedish equivalent of the UK’s ICO), issued a penalty of SEK 200,000 (approx. £16,795) to a high school that trialled the use of facial recognition cameras to record student attendance in a class.
Despite the limited scope of the trial (only one class of 22 students) and the school having sought consent from the pupils’ guardians, the Data Inspectorate found that the school board:
- Should not have relied on consent as the lawful basis for processing. A school exercises a degree of control over its pupils, who are dependent on it (for grades, funding, etc.), so consent was not freely given
- Could have achieved the same aim (attendance monitoring) in a less privacy-intrusive way, i.e. by using the existing manual methods for monitoring attendance in the class, and therefore infringed the purpose limitation and data minimisation principles
- Had not identified a valid condition for processing special category data, i.e. the biometric (facial recognition) data, since consent was not a valid lawful basis for the processing
- Finally, and very significantly, had not carried out a Data Protection Impact Assessment (DPIA) nor consulted the Data Inspectorate about the risks of the processing. A DPIA is required when processing is high-risk or involves new technologies: here the processing was high-risk because it involved biometric data and related to children, and facial recognition is regarded as a new processing technology
Of course, this isn’t the first facial recognition case to make the news recently. The ICO is currently investigating whether the live facial recognition system in use at King’s Cross station is lawful, and under what circumstances, after it was revealed that the King’s Cross developers were using the system for public safety. The test in that case is likely to be whether the system meets the requirements of Article 9 (processing special category data), particularly whether a general security system constitutes processing carried out “for reasons of substantial public interest”. Then there’s the ICO’s investigation into the Met Police’s use of live facial recognition.
All these cases call into question whether facial recognition technology can actually be used. If the technology uses imagery to identify individuals, then it is processing both personal data (the identification) and special category data (biometric data used for identification purposes is special category and is therefore afforded extra controls and protections). Data protection law consequently applies in two ways:
- There needs to be a lawful basis for the processing, and as the Swedish case highlights, depending on the circumstances even consent may not be sufficient; in the other cases, how do you obtain consent (if it is relevant) from unknown people in a crowd?
- A special category condition for processing must also apply, and these are very limiting
In the UK, the outcomes of these cases are likely to rest on R (Bridges) v Chief Constable of South Wales Police (SWP), in which the claimant is challenging South Wales Police’s use of live facial recognition.