by Vittorio Compagno for the Carl Kruse Blog
Facial recognition is one of the most effective methods of personal identification, even as its downsides become more apparent. Often appearing in movies and science fiction books, facial recognition seems a natural evolution of fingerprint recognition. The promises of this technology are plenty: with relatively high accuracy and minimal equipment required, it has been adopted by companies for employee recognition and by nations for public safety. Many have wondered whether it is ethical to use a distinctive feature such as a person's face as an instrument of recognition, especially when this technology could be exploited for illegitimate purposes.
How it Works
Facial recognition has years of machine-learning research behind it, and has evolved tremendously since its early days, when underpowered computers could not distinguish a cat from a fire hydrant. Nowadays, using the so-called "superbrains" of Google and IBM, facial recognition is at another level entirely.
Machine learning is the iterative learning, by a machine with adequate computational power, of scenarios or objects chosen by the programmer. In practice, it means feeding a computer as much data as possible so that it slowly learns what the programmer wants to teach it, over successive so-called "generations".
In some approaches, the learning process mimics Darwinian evolutionary dynamics: for a given scenario, the computer gives precedence to the candidates that were successful in the previous round. Slowly, the "species" the machine produces adapts itself to the scenario until it reaches the balance defined by the parameters the programmer set.
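The selection dynamic described above can be sketched as a toy evolutionary loop. Everything here — the numeric "individuals", the target value, the mutation size — is an invented illustration of the general idea, not code from any real facial recognition system:

```python
import random

# Toy sketch of generation-based selection (all names and values are
# illustrative assumptions, not part of any real system): each
# "individual" is a single number, fitness rewards closeness to a
# target, and each generation keeps the best performers and produces
# mutated copies of them.

TARGET = 42.0  # the "scenario" the species must adapt to

def fitness(x):
    # Higher is better: negative distance to the target.
    return -abs(x - TARGET)

def evolve(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Start from a random population.
    population = [rng.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Give precedence to individuals that did well in this round.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with slightly mutated copies of survivors.
        children = [x + rng.gauss(0, 1.0) for x in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()  # converges close to TARGET after enough generations
```

Modern facial recognition is in practice trained with gradient-based deep learning rather than a literal genetic algorithm, but the select-the-best-and-iterate intuition is the same.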
A program takes into consideration details that make up a face, such as the distance between the eyes, the shape of the chin, or the color of the skin. This data is then converted into vector representations, which are stored in a database. These distinctive features are unique from person to person, and comparing the facial data of different people helps distinguish one subject from another.
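A minimal sketch of that comparison step might look as follows. The three-dimensional vectors, names, and threshold are made-up toy values (real systems use learned embeddings of 128 or more dimensions), but the nearest-neighbor logic is the same:

```python
import math

# Sketch of the matching step described above: each face is stored as a
# fixed-length feature vector, and a probe face is compared against the
# database by Euclidean distance. Database entries and the threshold
# are hypothetical toy values for illustration only.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical database: identity -> normalized feature vector
# (eye distance, chin-shape score, skin-tone value).
database = {
    "alice": [0.31, 0.72, 0.55],
    "bob":   [0.64, 0.41, 0.29],
}

def identify(probe, db, threshold=0.15):
    """Return the closest identity, or None if no one is close enough."""
    name, vec = min(db.items(), key=lambda kv: euclidean(probe, kv[1]))
    return name if euclidean(probe, vec) <= threshold else None

match = identify([0.30, 0.70, 0.56], database)  # very close to "alice"
```

The threshold is what separates "same person, slightly different photo" from "different person" — tuning it is the core accuracy/false-match trade-off in any such system.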
Who Uses it?
Facial recognition technology also lies behind the famous deepfakes, which can give hilarious or frightening results depending on the good (or bad) use of the technology. In any case, it is clear that great computational power is needed to convert a person's facial features into mathematical models, and considerable structure and resources to store them in a database, or to collect them in the first place. However, thanks to open-source communities, recognition can be done at an amateur level with a laptop, a webcam, and already proven code.
The results may be similar, but the proportion of individuals who can be identified obviously changes. In fact, the resources and technologies for such large-scale application are held by companies such as TrueFace, AnyVision and Kairos. The largest states have already contracted these companies to install facial recognition technologies in major cities. China, the United States and the UK are the first of dozens of nations joining the evolution of mass surveillance facilitated by facial recognition. Is all of this necessary, or are we sacrificing our privacy for added security?
What is it Used for?
Facial recognition is used today for myriad purposes, from unlocking phones, to employee recognition in large companies, to police applications. The technology has proven useful precisely because it works remotely: where fingerprints, the main method of individual recognition so far, require physical contact, facial recognition only requires that the subject fall within the field of view of a camera. This convenience has been exploited by the police forces of the most advanced countries, which for years have been solving crimes by matching faces already on record with the face of the person who committed the crime.
A side effect of this new surveillance technology is the need to collect as much data as possible about thousands of individuals, and then compare it to find the subject of interest. Mass surveillance is, and will continue to be, an essential precondition for facial recognition to be effective.
But what if the use of this cutting-edge technology is disproportionate to the purpose? Is collecting thousands of data points about individual faces really worth it against all the ways this practice could backfire on those being monitored?
The Chinese case
That Chinese society seems to have come out of an episode of Black Mirror is by now clear to everyone. From the repression of free expression to the one-child policy, a society so full of individual restrictions is not a virtuous setting for facial recognition applications.
However, given the dystopian state in which Chinese citizens find themselves, it is worth understanding how far evil human intentions could go.
On Chinese territory, the repression, if not the extermination, of the Uyghurs, a Muslim community present in mainland China, has only recently been acknowledged.
I will avoid getting into the reasons that justify such a crime in the eyes of the Chinese government (more info here); it is clear even to the least malicious person how facial recognition can be used to identify members of a minority group and persecute them.
The systematic search for Uyghur facial features within Chinese communities was reported to the international community between 2019 and 2020, but almost nothing has been done to stop it.
Unfortunately, the pandemic did not help the minority hide from repression, although face masks were thought to be a barrier against indiscriminate facial recognition. Despite the difficulty, engineers at Huawei and Alibaba, two Chinese tech giants engaged in facial recognition in mainland China, managed to develop software that identifies the "enemies" of the state even through masks. So it is not surprising that the persecution of this minority has not stopped, even in times of pandemic, and that the internment of Uyghurs in labor camps remains a problem. All of this is perhaps a commentary on what the thirst for power can do, and on how acts such as racial persecution are facilitated by new facial recognition technology.
An ethical matter
When it comes to this innovative technology, we don't have to wonder if it will be used for nefarious purposes, but when. The Chinese case is one example, but let's not think the problem lies only in the East. Even in the West we face the use of facial recognition technologies for "security" purposes. The British, the Americans and even our European cousins know this, and they are seeing themselves overwhelmed by this increasingly indiscriminate and systematic form of control. This raises an ethical question: is it right to give up your privacy to reduce the crime rate? For many citizens the answer is obviously no, and citizen organizations have often fought against the use of facial recognition for surveillance purposes. Even the organs of power are slowly taking the side of the population. This has been confirmed by the Italian Privacy Guarantor, which recently ruled the use of facial recognition for surveillance purposes illegitimate, dashing the hopes of the police.
Prevention is better than cure
Someone who trusts our governments may suggest that the use of facial recognition technology is for the good of all. One might think it legitimate if it significantly reduces crime, for example, but experience shows how easy this technology is to abuse.
First only criminals were targeted, then participants in peaceful demonstrations (you never know, right?); tomorrow it will be every citizen who doesn't go around with a face covering.
Systematic control is an eventuality that will reveal itself to most people sooner or later, and at that moment it will be too late to stop it. We must act today and oppose facial recognition as a surveillance tool, as it not only affects individual privacy but also lays the foundations for a future in which everyone's facial features sit in a database, ready to be stored and analyzed. Who can say with certainty that the algorithm won't make mistakes? There are already several documented cases of errors in this technology, and as the scale of adoption increases, the mistakes multiply.
If those cases of unjustly incarcerated men and women seem distant, yet distressing, consider that a camera equipped with facial recognition software may be installed in your city while you are reading this article. Is it so absurd to think that, instead of waiting to become the victim of a mistake, we should prevent the mistake from ever happening, before the technology is out of control?
This Carl Kruse blog homepage: http://carlkruse.at
Contact: carl AT carlkruse DOT com
Other articles by Vittorio Compagno include Quitting Social Media and Dealing With AI Rights.
The blog’s last article was Grimes, Music and the Future of AI Art.
Find Carl Kruse also on the Ivy Circle.