Pseudonymising data to meet anonymisation standards
The contents of this article do not constitute legal advice and are provided for general information purposes only.
TEXpert AI is a third-party diversity data solutions partner that enables GDPR-compliant DE&I data collection and analytics as part of its broader offering to match diverse candidates with employment opportunities. In a previous article, we discussed how anonymising diversity data may help companies legitimately collect DE&I data to implement data-driven diversity strategies. This time, we will dive deeper to understand how anonymisation can be achieved through third parties and pseudonymisation, a de-identification procedure whereby data subjects cannot be identified from a dataset without additional information that is generally kept separate to protect identity.
Christopher Wiechert on Pseudonymised Data and UK GDPR
Is pseudonymised diversity data subject to the UK GDPR?
The UK GDPR is clear that pseudonymisation does not take personal data outside the scope of the UK GDPR. Anonymised data, on the other hand, is not subject to the UK GDPR. However, there can be a fine line between anonymised and pseudonymised data. Certain pseudonymisation techniques such as encryption may be sophisticated enough that an individual can no longer be identified from the data. At this point, the data is more appropriately classified as anonymised data, so long as the data controller cannot use reasonable means to re-identify the data subject. The answer may lie with who holds the key to de-anonymisation.
How can a third party help with pseudonymisation?
A realistic scenario in which pseudonymised data can fall outside the scope of the UK GDPR is one involving a third party.[1] For example, suppose Party A shares pseudonymised personal data with Party B through an ID generation system, Party B collects diversity data for Party A against the ID only, and neither Party has reasonable means to connect the two datasets or re-identify individuals. Here, the data is anonymous to Party B, and Party A is never exposed to identifiable diversity data, receiving only aggregated data from Party B as a service.
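The separation described above can be sketched in a few lines. This is a minimal illustration, not a production design: the party names, random opaque IDs, and the in-memory dictionaries are all assumptions made for the example.

```python
import secrets
from collections import Counter

# --- Party A (employer): generates opaque IDs and keeps the mapping private ---
employees = ["alice@example.com", "bob@example.com", "carol@example.com"]
id_map = {email: secrets.token_hex(16) for email in employees}  # held only by Party A

# Party A shares only the opaque IDs with Party B -- never the mapping.
shared_ids = list(id_map.values())

# --- Party B (diversity data partner): collects DE&I data against the ID only ---
responses = {
    shared_ids[0]: {"gender": "female"},
    shared_ids[1]: {"gender": "male"},
    shared_ids[2]: {"gender": "female"},
}

# Party B returns only aggregated statistics to Party A.
aggregate = Counter(r["gender"] for r in responses.values())
print(dict(aggregate))  # {'female': 2, 'male': 1}
```

Because Party B never sees `id_map` and Party A never sees `responses`, neither side alone can link a DE&I answer to a named individual; Party A receives only the aggregate.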
The UK ICO recognises that data may not relate to an identifiable individual in the hands of one controller, yet do so in the hands of another.[2] The relevant test remains the anonymisation standard: whether it is reasonably likely that an individual can be identified. So long as identification is not reasonably likely, the data is anonymised and not subject to the UK GDPR.
Three simple rules to mitigate risks of identification in data handling:
1. Collect personal data only if needed.
2. If personal data is really needed, start by pseudonymising it.
3. When using and sharing pseudonymised data, ensure that anonymisation has been achieved, i.e. an individual is not "reasonably likely" to be identifiable.
What are the newest anonymisation and pseudonymisation techniques being discussed? At the Internet Privacy Engineering Network (IPEN) webinar in December 2021, organised by the European Data Protection Supervisor, leading privacy professionals offered guidance on the practical use of pseudonymisation techniques, including cryptography.
Cryptography can be used for both anonymisation and pseudonymisation to securely protect sensitive information. Hashing is a simple cryptographic technique that turns personal information into a fixed-length value that does not itself contain any personal data.
A more advanced cryptographic technique is user-generated pseudonyms.[3] With this method, the user holds the key to decryption, which can be shared with the controller only when necessary. For example, a subway system that must track riders for payment could implement this approach. Each subway rider has a card with a unique number that is a combination of the rider's social security number and a PIN generated by the cardholder. That combination is then hashed by the controller. Anyone with access to the card, including the controller, would be unable to recover the social security number, because they do not hold the user-generated PIN. If the owner wanted to prove ownership of the card, perhaps to dispute a charge, he or she could provide the PIN to regenerate the unique number.
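The subway example can be sketched as follows. The SSN, PIN, and helper names here are hypothetical, and a real deployment would use a slow, salted key-derivation function rather than a bare SHA-256:

```python
import hashlib

def card_token(ssn: str, pin: str) -> str:
    """Hash of the SSN combined with a user-generated PIN; the controller stores only this."""
    return hashlib.sha256(f"{ssn}:{pin}".encode()).hexdigest()

# Enrolment: the rider combines their SSN with a PIN only they know.
stored = card_token("123-45-6789", pin="4821")

# The controller holds `stored` but cannot recover the SSN: without the PIN,
# even hashing every possible SSN will not reproduce the token.

# Dispute: the rider reveals the PIN, and the controller re-derives the token.
def verify(claimed_ssn: str, pin: str, stored_token: str) -> bool:
    return card_token(claimed_ssn, pin) == stored_token

print(verify("123-45-6789", "4821", stored))  # True
print(verify("123-45-6789", "0000", stored))  # False
```

The design choice worth noting is that re-identification requires the data subject's active cooperation (supplying the PIN), so the controller alone cannot reverse the pseudonym.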
The user-generated pseudonyms technique could also be applied where a third party stores diversity data for data subjects, who can access and change their own DE&I information by generating an ID or pseudonym. Protecting personal data through these techniques not only reduces the risks to data subjects but also helps the controller meet its data protection obligations. Anonymisation and pseudonymisation should be among the first considerations for any controller seeking GDPR compliance.
[1] Mourby, M., Mackey, E., Elliot, M., Gowans, H., Wallace, S., Bell, J., et al. (2018). Are 'pseudonymised' data always personal data? Implications of the GDPR for administrative data research in the UK. Computer Law & Security Review, 34(2), 222–233.
[2] What is personal data? ICO. (n.d.). Retrieved 17 March 2022, from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/key-definitions/what-is-personal-data/
[3] IPEN webinar 2021: "Pseudonymous data: processing personal data while mitigating risks". Retrieved 17 March 2022, from https://edps.europa.eu/ipen-webinar-2021-pseudonymous-data-processing-personal-data-while-mitigating-risks_en