
Deep fakes to preserve participant privacy

2021-11-19

A deep fake is a video, audio clip, or image that has been edited to realistically portray people in contexts and scenarios that never occurred. Deep fakes are known to be used for malicious purposes, with many of the known examples being hoax images altered to depict innocent people. Other malevolent uses include deceiving biometric scanners, political subversion, and cyberbullying. However, deep fakes are also used for entertainment and political commentary. Fake videos of the former President of the United States, Donald Trump, and of Queen Elizabeth are examples of sophisticated yet obviously satirical work, designed to entertain and, in some cases, to make a statement. The technology is becoming a popular medium for free speech, comedy and artistic expression. It can also have other productive uses, and in this article we discuss the use of deep fakes in transport research methods, where the technology can help preserve the privacy of a study participant or of a person passing by a vehicle that is recording data.

Large-scale field operational tests and naturalistic driving studies have collected thousands of hours of everyday driving over the last decade. These recordings are classified as personal information and are subject to data protection law; the data must be handled in accordance with the GDPR and treated with the highest respect from an ethical standpoint. It is common for research projects to collect video or images from voluntary participants with their consent.

It is also increasingly common for consumer vehicles equipped with connectivity and driving-assistance features to be fitted with external cameras that collect high-definition video of the surroundings. If this data is to be transferred outside the vehicle, the privacy and security of the people captured by these devices must be ensured, and privacy-preserving techniques could serve to disguise their identity.

Researchers are not interested in the persons per se, but rather in the facial expressions and actions of the driver. So how can we advance research in this area, sharing more data than in the past while still preserving personal integrity? One solution might be to use deep fakes. Deep fake generation required significant processing capabilities five years ago, but today it can be run on a decent workstation, at least for shorter sequences of data. By combining thousands of available images, it is possible to apply a synthetically generated face to the person driving or sitting in a vehicle. The face is generated from real images and video; the output is a unique but synthetic portrayal of eye glances, gaze direction, and head and body pose, which is useful for automated vehicle testing that requires thousands of hours of data. By using these realistic deep-fake avatars, test data can be generated and analyzed for the improvement and development of new technology. Researchers can look for interesting patterns of behaviour in these interactions, helping them understand how people behave and react around automated vehicles. This is a new use of the data, and it can be annotated manually by technicians or automatically by computers.
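As a rough illustration of what such an anonymization pipeline could look like, the minimal sketch below detects faces in each video frame and overwrites them with a synthetically generated face. It assumes OpenCV (opencv-python) is available for detection and writing video; the generate_synthetic_face() function is a hypothetical placeholder for whatever deep-fake model would actually produce a pose- and gaze-preserving synthetic face, and the file paths are illustrative only.

```python
# Sketch: per-frame face replacement for driver-video anonymization.
# Assumes opencv-python and numpy are installed. generate_synthetic_face()
# is a hypothetical stand-in for a real deep-fake generator.

import cv2
import numpy as np


def generate_synthetic_face(width: int, height: int) -> np.ndarray:
    """Hypothetical placeholder for a deep-fake face generator.

    A real pipeline would return a synthetic face preserving head pose and
    gaze direction; here we return a flat grey patch so the sketch runs.
    """
    return np.full((height, width, 3), 127, dtype=np.uint8)


def anonymize_frame(frame: np.ndarray, detector: cv2.CascadeClassifier) -> np.ndarray:
    """Detect faces in a frame and overwrite each region with a synthetic face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        frame[y:y + h, x:x + w] = generate_synthetic_face(w, h)
    return frame


def anonymize_video(src_path: str, dst_path: str) -> None:
    """Read a video, anonymize every frame, and write the result."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(
        dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
    )
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(anonymize_frame(frame, detector))
    cap.release()
    writer.release()


if __name__ == "__main__":
    # Illustrative paths only.
    anonymize_video("driver_recording.mp4", "driver_recording_anonymized.mp4")
```

In practice the generator would need to be conditioned on the original face so that eye glances and head pose survive the swap, which is exactly where the quality and validation risks discussed below come in.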

There are of course risks in implementing such a feature. To start with, because of the large quantities of data, validating each and every frame is not possible. There is therefore a risk that the deep fakes are not assembled in a sufficiently anonymized form, revealing the study participants. Also, applying this layer to images or video may introduce bias into the data, which downstream machine learning algorithms could absorb, ultimately resulting in a poorer AI model. In addition, the extra layer adds cost to the data collection and processing steps of a project, even if more data is produced in the long run.

Still, the technology is interesting, and for some use cases we may see applications that allow researchers and companies to handle less sensitive data, providing a great tool and incentive for data sharing!

