We all have basic recognition skills; in fact, these skills are essential to our functioning as human beings. For example, you can tell your brother from your Great Dane, even if they are roughly the same size. When you make that distinction, you probably look for features like ‘walks on two legs’ vs. ‘walks on four legs’ or ‘wears clothing’ vs. ‘covered in fur’.
You might assume that a scientist teaching a camera-equipped computer to tell the difference between your brother and your Great Dane would teach the computer to look for the same things that you look for, but computers see the world in fundamentally different ways than you do. As humans, we are aware of certain characteristics of other people, animals, or objects, and then we are aware of the way we feel around those things.
Computers think only in cold, hard numbers. They are aware of ones and zeros, and of very long and complex patterns of ones and zeros. So, then, how can computers tell the difference between your brother and your Great Dane? Or between your brother and your dad? Or between your dad and the lady behind him in the checkout line at the grocery store?
Computers assign a numerical value to each pixel in an image that tells them how light or dark it is, with white the lightest and black the darkest. Then they effectively draw arrows from lighter pixels to darker pixels to capture the direction in which the image gets darker. The result is called a ‘gradient’, a concept from calculus that is relatively easy for humans to draw but considerably harder for humans to use in calculations that produce useful results. Luckily, computers don’t struggle with the math at all, since they think entirely in numbers anyway (assuming the humans who programmed them taught them the math correctly!)
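That ‘arrow-drawing’ step can be sketched in a few lines of code. This Python snippet is a simplified illustration, not how any production system is written; the tiny image and its pixel values are invented. It measures how brightness changes around one pixel:

```python
import math

# A tiny 3x3 grayscale "image" (values invented for illustration):
# 0 is black, 255 is white, and brightness rises from left to right.
image = [
    [10, 120, 250],
    [10, 120, 250],
    [10, 120, 250],
]

def gradient_at(img, row, col):
    """Approximate the brightness gradient at an interior pixel using
    central differences -- the 'arrow' described in the text."""
    dx = (img[row][col + 1] - img[row][col - 1]) / 2.0  # change left-to-right
    dy = (img[row + 1][col] - img[row - 1][col]) / 2.0  # change top-to-bottom
    magnitude = math.hypot(dx, dy)              # how steep the change is
    angle = math.degrees(math.atan2(dy, dx))    # direction of the change
    return magnitude, angle

# At the center pixel, all of the change is horizontal.
print(gradient_at(image, 1, 1))  # → (120.0, 0.0)
```

A real system computes one of these little arrows for every pixel, then summarizes them into patterns, but each arrow is just this kind of subtraction.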
So the computer now has a picture of your brother that it has turned into one pattern of arrows, and a picture of your Great Dane that it has turned into a different pattern of arrows. How does it know that this one is ‘brother’ and that one ‘Great Dane’? The first time, the computer only knows if a human tells it. After that, the computer can compare the gradient it builds from any new picture to the examples it has stored for ‘brother’ and ‘Great Dane’ and decide whether the new gradient matches one it already knows. In the early stages of a machine learning program, a human then tells the computer whether it was correct and keeps feeding it more pictures of ‘brother’, ‘Great Dane’, ‘dad’, ‘lady from the grocery store’, or whoever or whatever else they want the computer to understand, until the computer can easily recognize any of these people, animals, or objects.
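The ‘compare the new gradient to stored examples’ idea is essentially nearest-neighbor matching. Here is a minimal Python sketch of that comparison; the short feature vectors are invented stand-ins for the much longer arrow patterns a real system would extract:

```python
import math

# Hypothetical feature vectors: imagine each labeled photo has already
# been reduced to a short list of gradient statistics. These numbers
# are made up purely for illustration.
labeled_examples = {
    "brother":    [0.9, 0.1, 0.4, 0.2],
    "Great Dane": [0.2, 0.8, 0.3, 0.7],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    """Return the label whose stored example is closest to `features`
    (a one-nearest-neighbor version of 'compare the new gradient to
    the examples it already has')."""
    return min(labeled_examples,
               key=lambda label: distance(features, labeled_examples[label]))

new_photo = [0.85, 0.15, 0.35, 0.25]  # numbers that sit close to "brother"
print(classify(new_photo))  # → brother
```

A real system compares thousands of numbers per face rather than four, and modern systems learn the comparison itself, but the matching logic is the same in spirit.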
Because computers recognize people, animals, and objects in a fundamentally different way than humans do, it is possible for computers to recognize people in situations where humans never could (such as when a photo is pixelated), but it is also possible for computers to be fooled by images that would never fool a human. For example, Jeff Clune ran a study in 2015 in which he and his team generated random images, as well as images that look to the human eye like old television static, and found that advanced image recognition AI classified these images as very concrete things, such as a school bus, a peacock, or a starfish, with over 99% certainty. Clune and his team believe that computers confuse these images because the patterns of light and dark, or of certain colored pixels next to one another, closely resemble the patterns in images of things humans would also recognize. They also note that specially constructed masks might fool AI into believing that a person has a different face, or that no face is present at all.
Probably the most significant use of AI facial recognition to date is in China. Citizens of China can already use facial recognition AI to pay for food and other items, withdraw money from ATMs, and check in at airports. The Chinese government has also begun an extensive project to create an “omnipresent, fully networked, always working and fully controllable” camera system by 2020, on top of the “nearly two hundred million public-surveillance cameras” it already operates. These systems can recognize cars, bicycles, and other objects in videos in addition to faces, which in essence makes the videos searchable. In other words, government-controlled facial recognition is no longer a futuristic possibility in China; it is an everyday reality.
Of course, there are pros and cons to this reality. One woman, Mao Ya, enjoys that she can unlock her house just by looking into a camera when her hands are full of groceries. Another Chinese citizen, a “former magazine editor who was ousted by the government” worries that there are eyes on him at all times, no matter where he goes. These concerns seem to be valid, especially in light of China’s intense and targeted monitoring of ethnic minorities and interest in silencing dissenters.
The Chinese government has also been using facial recognition AI for purposes that seem a little funny at first glance, like keeping citizens from using too much toilet paper in public restrooms and shaming jaywalkers. However, these applications are more serious when you consider the detailed level of monitoring that they require.
In the beginning, the training images for facial recognition AI came mostly from photos of celebrities, which makes a certain kind of sense: there are an awful lot of celebrity images available. However, celebrity photos pose a serious problem. The pool skews heavily toward white faces, so facial recognition algorithms trained on it learn a bias and struggle to recognize the faces of ethnic minorities, particularly Black women.
In their attempts to correct this problem and make facial recognition systems more accurate and reliable, AI developers have taken photos from various corners of the internet without the consent of the photographers or the subjects. Most notably, IBM took a set of “nearly a million photos” from Flickr to offer to researchers and companies working on facial recognition algorithms. IBM did not notify the photographers or the subjects, which raises a host of ethical issues.
Not everyone wants to help advance facial recognition technology, for one thing. Some people believe that the likely abuses of facial recognition AI far outweigh the likely benefits, and that we as a society should not continue to develop this technology. Then there are people who do think facial recognition AI is a good idea, but even the vast majority of these people would prefer that a company like IBM ask their permission before using their pictures.
So are the photos you post online safe from this kind of harvesting? The short answer is no, particularly if your account is set to public. Nothing you put on the internet is really ‘safe’, because regardless of the ethics of the companies involved, there are always hackers (and, if your account is public, ordinary non-hacking strangers) who could gain access to your information. In other words, nothing you truly want to keep to yourself should ever be online in any capacity.
But what are companies like Instagram, Facebook, Apple, and Google doing with the photos that you do choose to store or share online? The short answer is: it depends on the company, but there are definitely similarities.
Facebook famously has a tagging algorithm that can tag your friends and family in the photos that you post, and it does this even if you weren’t really planning on tagging anyone. Sometimes, it can even be a little difficult to un-tag people once Facebook has tagged them.
What a lot of people don’t realize about Facebook’s tagging algorithm is that it is as strong as it is because the majority of regular, everyday Facebook users have helped to train it. Every time you tag a friend or answer a tagging suggestion question (Is this Bob?), Facebook’s tagging algorithm stores that information and uses it to more accurately tag faces in the future. If Bob has a lot of Facebook friends (and we’ll give him the benefit of the doubt here), he probably gets tagged a lot. Chances are that Facebook has a really good idea of which pictures do and do not contain Bob. Assuming they both have a Facebook account, Facebook can probably tell the difference between your brother and your Great Dane about as well as you can.
As you probably know, Facebook owns Instagram. While Instagram does not auto-tag, leaving you to tag your friends yourself (and in general has attracted far less bad press about security and privacy), it seems plausible that the people you tag on Instagram go into the same database as the people you tag on Facebook, leaving the two social media platforms with similar issues.
Apple also has a facial recognition AI system. In fact, it now has two: the system on the newest iPhones that unlocks your phone as a form of biometric security, and the system known as ‘People’ in Photos, which can identify your friends and family after you have named a couple of pictures of a person, much like the Facebook tagging algorithm. Ostensibly, the ‘People’ feature is local to your computer (it is private to you and does not require internet access), but the data moves into the cloud if you pay for iCloud photo storage.
Apple’s Photos algorithm is super helpful if you are looking for photos of a specific friend or family member, because it collects them all in one place. However, you are feeding data about your friends and family into an AI system, most likely without their express permission. Something to think about.
Google has a very similar system, and like Apple, Google tends to have access to the entire photo library of its users (the users being all of the people who subscribe to Google Photos, not all of the people who use Google Search). Google also boasts the ability to find things and places. In fact, Google offers as examples that a user could search their library for “a wedding [they] attended last summer, [their] best friend, a pet, [or their] favorite city.”
Google also asks for user input to improve its AI, making its system very similar to Apple’s and Facebook’s. As long as their only purpose is tagging friends and managing photo libraries, these AI systems are harmless and even genuinely helpful, but there is no guarantee that this is all they will ever be used for. If you or any of your friends or family use these systems (and even if you don’t, you almost certainly have friends or family who do), your name and face are trained into powerful AI databases, and it is entirely possible that these databases will become part of surveillance systems, or systems with some other purpose, in the future.
Facial recognition AI has a lot of potential for good. Assuming that the governments and/or companies with access to facial recognition databases don’t abuse that power, facial recognition could provide better security and safety, especially in cities where huge volumes of people frequent the same limited number of places every day. It could also help predict diseases before patients even notice symptoms. It’s already helping farmers monitor their cows’ behavior and needs.
While it is not facial recognition per se, recognition AI is also at work in self-driving cars, which are covered in cameras and use AI to decide when to stop, turn, slow down, and so on.
Essentially everything we use today could incorporate some form of AI technology, from an automatic table saw that shuts off the instant it touches flesh to a home security system that uses home surveillance footage and AI to recognize frequent passersby and flag suspicious activity. Every tech trend suggests that AI can be applied to any industry.
AI recognition has the potential to go wrong when people abuse the power that comes with it: the power to know where anyone is and what they are doing at any time. People worry that controlling governments could monitor citizens in ways that go far beyond apprehending criminals, such as tracking where they frequently dine or shop and cracking down on cultures that run counter to the government’s mission.
There is also a lot of potential for racial profiling in AI technology. While a computer does not acquire or act on biases the way humans do, it can certainly absorb and reproduce the biases of the people who train it. For example, if an AI program were trained to recognize potential criminals, and the training photos came from inner-city mugshots, which historically include many people of color and very few white people, then that program would most likely suspect mainly people of color of being criminals. It would carry an existing bias forward under the (incorrect) baseline assumption that computers are unbiased, making the entire process more harmful than if humans made the calls.
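The mechanism behind this kind of bias is easy to demonstrate in miniature: a model with no better signal falls back on the base rates of its training data, so a skewed training set produces skewed predictions. A toy Python sketch, with all numbers invented:

```python
from collections import Counter

# Hypothetical training set: labels drawn from a skewed source,
# standing in for mugshot data that over-represents one group.
training_labels = ["group_a"] * 90 + ["group_b"] * 10

# A trivially "trained" model: absent any informative features, it
# predicts whatever it saw most often. Real models are subtler, but
# class imbalance leaks into their error rates in the same way.
counts = Counter(training_labels)
default_prediction = counts.most_common(1)[0][0]

print(default_prediction)  # → group_a
```

The skew in the data becomes the skew in the output, yet the result arrives with the false authority of a machine.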
Unless we reach an AI-versus-humanity, computers-don’t-listen-to-us-anymore, dystopian-universe sort of situation, AI facial recognition is on some level a tool like any other. By itself, AI facial recognition is not causing a problem, just as a gun sitting in a gun safe is not going to shoot anyone; but in the hands of the wrong people, corporations, or governments, it is a terrifying concept. While it is ultimately just a tool, AI facial recognition is an extremely powerful one, and one that could very easily be handed decision-making power: matching security footage to a name, or deciding whether a person is innocent or guilty.
Currently, there are very few laws regulating AI and deciding who can build it, who can use it, and for what purposes. As long as this remains the case, we should be very worried about the future of AI facial recognition. It’s too powerful a tool to be out in the world with little to no regulation.
Obviously, we can’t un-invent AI, and we probably wouldn’t want to. The next best thing we can do to protect ourselves and our societies from rampant AI technology is to take an active part in the law-making process: to encourage a conversation about AI and to ask and petition candidates and members of the government to address the issue now, before AI systems that will cause more trouble than good can establish themselves as the new normal.