The Federal Trade Commission (FTC) has issued a warning about the potential misuse of consumers' biometric data in connection with emerging technologies such as machine learning and generative artificial intelligence (AI).
In a policy statement released last week, the FTC warned that the increasingly widespread use of biometric data, including by technologies powered by machine learning and AI, poses threats to consumer privacy and data security. Biometric data includes measurements of a person's physique or voiceprint, as well as information that depicts or describes physical, biological, or behavioral traits and characteristics.
Samuel Levine, director of the FTC's Bureau of Consumer Protection, said that biometric surveillance has grown more sophisticated and pervasive in recent years, posing significant threats to privacy and civil rights. According to the policy statement, businesses must comply with the law regardless of the technology they employ.
The FTC release cited several ways in which consumers' biometric data could be misused to violate their privacy and civil rights:
For instance, using biometric information technology to identify consumers in particular locations could reveal sensitive personal information about them, such as whether they accessed certain types of healthcare, attended religious services, or took part in union or political gatherings. Large databases of biometric data could also be attractive targets for malicious actors who could misuse the information. In addition, some biometric technologies, such as facial recognition software, may exhibit higher error rates for certain populations than for others.
The statement also listed examples of practices the agency will scrutinize in determining whether a company's use of biometric information violates the law, including:

- Gathering biometric data without first evaluating foreseeable risks to consumers;
- Failing to take steps to mitigate or eliminate known risks;
- Collecting or using consumers' biometric information surreptitiously or in ways they would not expect;
- Failing to evaluate the practices and capabilities of third parties who will have access to biometric data;
- Failing to provide appropriate training for employees and contractors who work with biometric data; and
- Failing to monitor biometric technologies on an ongoing basis to ensure they function as intended and are unlikely to harm consumers.
The FTC's policy statement acknowledged research from the National Institute of Standards and Technology (NIST) showing that advances in machine learning made facial recognition technology 20 times better at finding a matching photo in a database between 2014 and 2018.
These improvements, according to the statement, are due in large part to developments in machine learning, combined with data collection, storage, and processing capacities sufficient to support the use of these technologies.
In particular, so-called "deepfakes" use biometric data to create counterfeit images and audio that appear realistic to an unsuspecting viewer or listener. The FTC warned that deepfakes could enable bad actors to convincingly impersonate individuals in order to commit fraud, or to defame or harass the people they depict.
A recent Senate committee hearing offered a firsthand demonstration of just how sophisticated generative AI technology has become at mimicking a person's biometric traits, including their voice.
Sen. Richard Blumenthal, D-Conn., who chairs the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, opened the hearing by playing audio of an introductory statement that sounded eerily like the senator himself, even though the text was written by OpenAI's ChatGPT and the audio was produced by voice-cloning software.
The words and the voice, Blumenthal told the hearing, were not his own. While the recording might strike listeners as amusing, he said, what echoed in his mind was what might have happened had he asked the software for something else: what if it had produced an endorsement of Ukraine capitulating, or of Vladimir Putin's leadership? That, he said, would have been truly frightening.