Paying by face: it's the latest offer from Mastercard. Smile and wave at the camera next to the checkout, and your face is all that's needed to pay your bill.

OK, the waving part is a marketing gimmick ('smile to pay' is the cheesy promo line), but Mastercard's biometric checkout system is ready to roll, with the company claiming almost three quarters of its customers are happy to smile and go.

The system is currently being trialled in Brazil. Customers who opt in need only install an app that captures their picture and payment information, which is then linked to one of a handful of third-party providers (Fujitsu, Payface, PopID). At the checkout the customer's face is matched against the stored data, and once their identity is verified the funds are deducted automatically from their account.

At least under the Mastercard program you get the choice to opt in.

That's not the case with other forms of facial recognition that are capturing our faces and identifying us to goodness knows who, without any agreement on our part.

The many faces of the internet

Clearview AI, FindClone and PimEyes may not be household names like Mastercard but their intrusion into your life could be more impactful. And more sinister.

Unlike Mastercard's facial payment process, Clearview AI's software operates on data it has scraped from the internet. Facebook, Twitter, your work's 'meet the team' page – all of it is scooped up by Clearview AI to build a gigantic database: more than 20 billion pictures of faces so far, with the aim of extending this to an incredible 100 billion within months. And the company is necessarily going for volume: the more data the system has, the more accurate, and certainly the more comprehensive, the facial recognition service it can provide.

Clearview AI says it offers its services only to select governments and law enforcement agencies, and to certain companies – banking, insurance and finance, that sort of outfit. It made its services available to the US Government to help identify people who participated in the 6 January 2021 insurrection at the US Capitol in Washington, DC, and has offered them free to the Ukrainian government to help identify Russian soldiers and assailants.

Wired magazine recently ran its own identification test: using a free trial of FindClone, a Russian facial recognition service, it took less than five minutes to identify a captured Russian soldier by matching him to his social media profile. The teenager's profile on the Russian social network VKontakte included his birthdate and family photos, and listed his place of work as 'polite people/war'. 'Polite people' is a Russian phrase referring to the soldiers who were active in Ukraine during the 2014 annexation of Crimea.

So how does it work?

Facial recognition uses deep learning algorithms to locate, analyse and match faces in photos and videos; the algorithms learn by being trained on large databases of photographs. It is one of the most powerful uses of AI to date, and it works in three steps.

First, the computer is trained to find a face in a video or photograph. To detect a face, the AI needs to know which pixels match the contours and characteristics of a face. This is no easy feat. Humans detect faces by nature; it is the first thing we recognise from birth. Computers, however, need to be told which pixel structures resemble a face.
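
To make step one concrete, here is a minimal sketch using the open-source OpenCV library and its classic Haar-cascade face detector (modern systems use deep-learning detectors, but the principle is the same: scanning the pixels for face-like structures). The file name is a placeholder.

```python
import cv2  # pip install opencv-python

# Load a pre-trained frontal-face detector that ships with OpenCV
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale

# Returns one (x, y, width, height) box per face found in the image
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")
```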

Second, the algorithm maps the face. It transforms what we see as a face into a mathematical value: the distance between the eyes, the distance from the right nostril to the right earlobe, the length of the nose, and so on. The face becomes a long string of data – a unique 'faceprint'. It is this faceprint that lets your phone unlock when you look at it.
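
One way to see what a faceprint looks like in practice is the open-source face_recognition library (built on dlib), which condenses each detected face into a vector of 128 numbers. A minimal sketch, again with a placeholder file name:

```python
import face_recognition  # pip install face_recognition

image = face_recognition.load_image_file("photo.jpg")  # placeholder input image

# One 128-number vector (a 'faceprint') per face detected in the image
encodings = face_recognition.face_encodings(image)

if encodings:
    faceprint = encodings[0]
    print(f"Faceprint has {len(faceprint)} measurements")  # 128
```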

Third comes the matching – the 'recognition' part. By comparing a new faceprint against those stored in a database, the system can confirm a person's identity. And it's this final step that makes facial recognition supremely valuable.
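
At its simplest, matching means measuring how far apart two faceprints are and declaring a match if they are close enough. A minimal sketch, assuming the 128-number encodings from the previous example:

```python
import numpy as np

def is_match(faceprint_a, faceprint_b, threshold=0.6):
    """Treat two faceprints as the same person if they are close enough.

    0.6 is the face_recognition library's default tolerance; the right
    value depends on the system and on how costly a false match is.
    """
    distance = np.linalg.norm(np.asarray(faceprint_a) - np.asarray(faceprint_b))
    return distance < threshold
```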

But to be valuable the algorithm has to be reliable. The more data it is 'fed' – the more faces it absorbs: black, white, male, female, young, old, non-binary, tattooed – the better it becomes. Part of the problem with early versions of facial recognition was the number of 'false positives' they produced: too many 'matches' were to lookalikes, not to the actual person. Early facial recognition databases also overwhelmingly featured white men, so the systems struggled to identify women and people of colour. Sometimes the AI failed to recognise that black women even had faces, so it failed at the very first step. Moreover, facial recognition relies on contrast, which in most lighting is stronger on white faces than on black faces, so there is an inherent colour bias in these algorithms.
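
Where the match threshold is set directly trades false positives (lookalikes accepted) against missed matches (genuine matches rejected). A small simulation makes this trade-off visible; the distance distributions below are invented for illustration, not taken from any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented faceprint distances: genuine pairs tend to be close,
# lookalike ('impostor') pairs tend to be further apart
same_person = rng.normal(0.4, 0.1, 1000)
lookalikes = rng.normal(0.7, 0.1, 1000)

for threshold in (0.5, 0.6, 0.7):
    false_positives = np.mean(lookalikes < threshold)   # lookalikes accepted
    missed_matches = np.mean(same_person >= threshold)  # real matches rejected
    print(f"threshold {threshold}: {false_positives:.1%} false positives, "
          f"{missed_matches:.1%} missed matches")
```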

Gender identification can also be a problem: the labelling in the AI is typically binary – male or female. However, people are increasingly putting their own labels on their social media accounts for how they wish to be identified, so there is scope for improvement in facial recognition systems that draw on social media.

Facial recognition for everyone

The information that can be pulled out of a facial recognition system is incredibly detailed, and increasingly easy to access.

Facial recognition company PimEyes differs from Clearview in two ways: it says it does not scrape its database images from social media and it is available for anyone to use – anyone willing to pay, that is.

That means it is possible to take a photo of someone on your bus, or of a person walking on the street near you, and upload it into the PimEyes search engine, which will search its database to see if there is anything on the internet about that person. According to tests run by the New York Times, it is incredibly accurate. The AI managed to identify people (the NY Times used a handful of its journalists, among them a black woman) even when they were wearing a facemask, looking to the side, standing in a group photo, or appearing in the background of a video filmed at a party. In the near future no wedding party will be a safe space.

However, when we tried it out – wearing very solid face masks and our glasses – it threw up a lot of false positives. While it did accurately identify Kai, it offered up a lot of Sandra lookalikes. These false positives are very concerning: the 'discovered' doppelganger can appear in a very different context, yet users might believe the match is real. To be fair, without the masks the system accurately identified both of us and surfaced plenty of other photos of us on the internet, including some we did not know about, but it was still not free of false matches.

It is an explicit term of PimEyes' service that users only upload a photo of their own face. If and how this can be enforced is a different matter.

Facial recognition for good

Let’s not forget that not all facial recognition is for nefarious or creepy purposes. Australia, much like other countries, uses SmartGate technology at its international airports to facilitate swift and secure international movements.

For about five years, China has been using a biometric recognition system that allows commuters to pay for train tickets with their face. Law enforcement officers have also successfully used facial recognition to identify child victims of abuse and people who have been illegally trafficked, as well as to identify criminals from crime scenes.

Facial recognition is not just for humans: fish have faces too, and in the Northern Territory the Department of Primary Industry used AI fish facial recognition to take the pressure off time-poor agricultural researchers who would otherwise have to scan hours and hours of video footage to identify different fish species. Ditto for pigs, sharks and koalas (though who would not want to sit watching koalas all day?).

Who is watching the watchers?

As with many AI applications, the ethical issue frequently boils down to oversight: who is watching the watchers? Clearview's algorithms are black boxes: there is no public scrutiny or accountable audit process for how they are constructed or used. Concerns over high rates of false positives for people of colour, and the risk of over-policing certain population groups, have led some jurisdictions, such as San Francisco, to ban the use of facial recognition systems in public places.

Some governments have reacted vigorously to protect their citizens' privacy. Recently the UK Information Commissioner's Office fined Clearview AI £7.5 million for harvesting the data of UK citizens from social media platforms such as Facebook and Twitter, in breach of UK data protection law. But the company has already announced it will not change its practices, as it is not currently selling its services to clients in the UK. Clearview has collected more than 20 billion images of people's faces – that is three photos for every person on the planet. The company says it is on track to amass 100 billion, up from just 3 billion in 2020.

How can we know if our image is sitting in a facial recognition database? What other information does it hold on us? Who is accessing that information? Can we opt out? PimEyes says it will take you out – if you give it even more personal information (your driver's licence or passport).

These systems are too powerful (and secretive) for individuals to confront. And governments acting solo will have relatively little impact up against the global nature of the internet, as the UK Clearview case demonstrates.

Image: Jan van der Wolf