Recently, Clearview AI, a facial recognition company, told its investors that its new plans include collecting 100 billion photos of human beings. According to the company, this would make its facial recognition technology, coupled with the power of artificial intelligence, able to recognize almost any human being on Earth.
With roughly 7 billion inhabitants in the world, the figure cited by Clearview works out to an average of about 14 photos of each individual on the planet. Furthermore, the company assures that this recognition power could be used to support surveillance systems and criminal investigations around the world.
These images were collected by Clearview from the Internet. According to the company’s CEO, Hoan Ton-That, “Clearview AI’s publicly available image database is legally collected, like any other search engine, including Google.” However, some organizations are not happy with the outlook projected by this new business plan, and for good reason.
Years ago, there was talk of how the Chinese government sought to control its population through the facial recognition systems it had successfully implemented in the country. Now we face the same thing, only worldwide. What dangers could result from an artificial intelligence capable of recognizing anyone on the planet? Beyond the good intentions that Clearview claims to have, is it really ethical to store the face of every person in the world? What could the consequences be?
Dangers to civil liberties and the right to anonymity
During the Hong Kong protests in late 2019, protesters took precautionary measures to protect their identities. In China, facial recognition technology has been used for some time to monitor and arrest people who may be linked to crimes in the Xinjiang region, according to a report by The Washington Post. Because of this, the consequences for protesters whose faces were filmed could be devastating.
According to Clearview, its facial database has already reached 10 billion images and is growing by 1.5 billion images every month; at that pace, the jump to 100 billion would take roughly another five years. The company is looking to raise around $50 million from investors, a sum that would allow it to keep pushing toward that 100-billion-image goal.
Andrés Ruiz, privacy attorney at Metricson, told El País that this type of technology can pose serious risks to people’s security and privacy. Although as a society we may feel that our face does not matter or hide any information, biometric data is a precious asset: it allows us to be recognized at any time if a database contains our information.
These widely used facial recognition systems can have serious effects on individuals’ privacy, since they can easily capture biometric data without the knowledge of the person concerned. Extensive and indiscriminate use could end anonymity in public and private spaces and allow continuous surveillance of individuals.
Andrés Ruiz, Privacy Lawyer at Metricson
“The right to privacy involves deciding when we disclose personal information.” With this phrase, the website Side Bits reminded us of the capacity for self-determination that the right to privacy offers. As human beings with a right to privacy, we have the ability to decide when we want to disclose personal information and when we want to abstain. It’s not that facial recognition or artificial intelligence are negative tools per se, nor does it mean that they will be used in all sorts of dystopian machinations. However, this raises several questions:
Will we have the right to decide whether we want to be in the database? When will our information be used? For what purposes? What happens if the information is compromised by malicious companies or agents?
What would happen if companies sold our biometric data?
In a world where all kinds of data are used to personalize the user experience and sell products and services tailored to our needs, facial data is just one more input. Currently, technologies such as the Microsoft Cognitive Services Face API allow images of people to be analyzed to extract attributes such as gender, head pose, estimated age, and facial hair.
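To illustrate how routine this kind of analysis has become, here is a minimal sketch using the (now legacy) azure-cognitiveservices-vision-face Python SDK. The endpoint, key, and image URL are placeholders, and Microsoft has since restricted access to some of these attributes, so treat this as an outline of the idea rather than a working recipe.

```python
# Sketch: requesting face attributes from the Azure Face API.
# FACE_ENDPOINT, FACE_KEY and IMAGE_URL are placeholders for illustration only.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

FACE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
FACE_KEY = "<your-key>"
IMAGE_URL = "https://example.com/photo.jpg"

client = FaceClient(FACE_ENDPOINT, CognitiveServicesCredentials(FACE_KEY))

# Ask the service for the attributes mentioned above: age, gender,
# head pose and facial hair. Each detected face comes back with estimates.
faces = client.face.detect_with_url(
    url=IMAGE_URL,
    return_face_attributes=["age", "gender", "headPose", "facialHair"],
)

for face in faces:
    attrs = face.face_attributes
    print(
        f"age~{attrs.age}, gender={attrs.gender}, "
        f"head yaw={attrs.head_pose.yaw}, beard={attrs.facial_hair.beard}"
    )
```

A few lines of code and an image URL are enough to obtain this kind of profile, which is precisely why the question of consent matters.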
However, just as the use of cookies had to be regulated due to predatory practices by companies such as Facebook, facial recognition technology and artificial intelligence will open the door to another type of marketing, one that could target more vulnerable groups of users.
What if an AI starts “recommending aesthetic services” to people based on their physical appearance? Will people’s expressions be used to segment them by emotional state and recommend related services and products? What if this data starts being sold to third-party companies without consent? This time we are not just talking about our browsing habits, as with cookies; we are talking about selling our identity. And that opens the door to something darker.
The risk of identity theft is even greater. If malicious companies or actors manage to steal people’s biometric data, we could face the greatest danger of identity theft yet. We already live in a world where we can unlock our phones with our faces, where the only thing standing between someone and our bank accounts and private data is a cluster of facial detection sensors. So what happens if this information falls into the wrong hands?
Artificial intelligence increases the likelihood of being judged on appearance
As The Washington Post noted in its report, facial recognition technology was designed and repeatedly tested on white people. This, of course, ensures a fairly high detection success rate for that group. However, for other groups, such as Latinos or Black people, detection is not as accurate. This could lead to appearance-based judgments being made by a still-flawed artificial intelligence.
Also, in more alarmist scenarios, we might be about to live out the “Nosedive” episode of Black Mirror. Could artificial intelligence determine our ability to obtain credit? What about our professional prospects?
On the other hand, although a true artificial intelligence should be able to identify patterns on its own, what if it is manipulated to sabotage people based on their appearance or lifestyle? Perhaps the danger of this information being exploited by authoritarian governments is not imminent today (or at least not in much of the West). But what are the chances that a government could determine the fate of its inhabitants based on judgments about sexual orientation, political or religious alignment, and personal traits such as skin color?
Facebook, Google and Twitter against Clearview’s AI
So far, there are no federal laws regulating how an artificial intelligence may operate or how far its capabilities may extend. Because of this, big companies in the tech industry, including Microsoft, IBM, Google, and Amazon, have decided to keep the technology on hold until there are more regulations and laws on how it should be used.
For Clearview, however, collecting the face of every person on the planet an average of 14 times over is just another business opportunity, nothing more. The company also assures that its product is more complete than China’s, since it is built by collecting images from public sources along with related social information.
However, Facebook, Google, YouTube, and Twitter have already distanced themselves from Clearview’s narrative. These companies demand that Clearview stop taking photos from their platforms and delete all those collected previously. Clearview, however, has decided to play the First Amendment card to protect its interests.
Facebook, which prohibits automated copying, or “scraping”, of data from its platform and has an external data misuse team, banned Clearview’s founder, Hoan Ton-That, from its website and sent the company a cease-and-desist order. But Clearview has declined to provide information on the extent to which users’ Facebook and Instagram photos remain in Clearview’s database, an official from Facebook’s parent company, Meta, told The Post.
Washington Post