Black Mirror or Real Life?
By Kara Thompson
From memory-recording devices to digital clones of people, the anthology series Black Mirror explores dystopian futures built on not-so-far-fetched technologies that could soon shape our everyday lives.
It may be science fiction, but some of the technological advancements depicted in the show are starting to appear in reality. With the recent surge in popularity of A.I. in various forms — from chatbots to artificially generated artwork — Black Mirror may be closer to real life than you think. Below are just a few examples of this:
Ads for a random thing you just had a phone conversation about
The first episode of the most recent season, Joan is Awful, follows a woman who discovers that a streaming show has been created based on her day-to-day life. Her phone and other devices are always “on” and listening, feeding the show intimate and sometimes embarrassing details.
In reality, phones don’t listen in on our conversations — a common misconception — but they do monitor our online behavior. However, when voice assistants like Siri are activated, our phones do listen to us and can use the information gathered from those requests to target us with ads.
Companies gather and use data from social media platforms as well as internet search history to target us with ads they think we’d be interested in. In addition, location services may track users’ whereabouts without their knowledge.
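To make the mechanism concrete, here is a purely illustrative Python sketch of interest-based ad matching; the signals, categories, and scoring below are invented for illustration, not any company’s actual system.

```python
# Toy interest-based ad targeting: score ad categories against signals
# collected from a user's searches and social activity. All data invented.
from collections import Counter

# Hypothetical signals harvested from search history and social media.
user_signals = ["hiking boots", "trail maps", "camping stove", "trail maps"]

AD_CATEGORIES = {
    "outdoor gear": {"hiking", "camping", "trail"},
    "kitchenware": {"stove", "cookware", "knife"},
}

def rank_ads(signals: list[str]) -> list[tuple[str, int]]:
    """Rank ad categories by how many signal words they match."""
    words = Counter(word for signal in signals for word in signal.split())
    scores = {
        category: sum(words[w] for w in keywords)
        for category, keywords in AD_CATEGORIES.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_ads(user_signals))  # "outdoor gear" outranks "kitchenware"
```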
Scammers and hackers seek out this private information, or it can get leaked in a data breach — potentially leading to identity theft.
Using artificial intelligence to mimic or replicate loved ones
In the Black Mirror episode “Be Right Back,” a woman uses a service that allows her to communicate with her dead partner, eventually creating an android replica of him to live with.
Though lifelike android replicas don’t yet exist in real life, funeral homes in China have used tools like ChatGPT and Midjourney, in combination with photos and videos, to render virtual versions of deceased people so their loved ones can communicate with them one last time.
The Korean company DeepBrain AI offers a similar service called Re;memory, which uses video clips of a person and interviews with surviving loved ones to put together a 30-minute experience with the deceased. It also offers video messages, personalized biographical videos, and memorial services with an A.I. of the deceased present.
The CEO of the company Luka took this idea even further when she developed a bot in memory of her friend Roman Mazurenko, who was hit by a car. Using a file of hundreds of text exchanges between her and Mazurenko, together with a neural network trained on 30 million lines of Russian text, she built a bot that could respond to queries in a way that sounded like Mazurenko.
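For a sense of how such a bot can work, here is a minimal, illustrative Python sketch. It uses simple text retrieval over an archive of a person’s past messages, a much cruder approach than the neural network Luka actually trained, and the archive below is entirely hypothetical.

```python
# Toy "memorial bot": answers a prompt by retrieving the archived
# message most similar to it. A stand-in for the neural approach
# described above, not Luka's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archive of the person's past messages.
archived_messages = [
    "Don't worry, it'll work out. It always does.",
    "I'm stuck at the studio again, let's talk tomorrow.",
    "That party was something else. Moscow never sleeps.",
]

vectorizer = TfidfVectorizer()
message_vectors = vectorizer.fit_transform(archived_messages)

def reply(prompt: str) -> str:
    """Return the archived message closest to the prompt."""
    prompt_vector = vectorizer.transform([prompt])
    scores = cosine_similarity(prompt_vector, message_vectors)[0]
    return archived_messages[scores.argmax()]

print(reply("Should I worry about the deadline?"))
```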
While this might help some people get closure, it may harm the grieving process for others. If there is an online avatar of your dead loved one, you might treat it like the real person, blurring the boundaries between human and machine.
There are also important data protection implications in these scenarios. Uploading a large amount of a person’s data to the internet to create these bots and avatars provides businesses with valuable and intimate information about consumer behavior. Though the intention may be to create an accurate digital replica of a loved one, doing so opens up an avenue for sneaky marketing opportunities.
Brain implants and Musk’s Neuralink brain chip
Elon Musk is developing Neuralink, a chip embedded into people’s skulls by a robot — which he is also developing. The goal of the chip is to eventually allow a person with paralysis to use a computer, phone, or other device with their brain alone. Threads from the chip extend directly into the brain, and the chip connects over Bluetooth to nearby devices that decode brainwaves to move a mouse or type on a keyboard.
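For intuition about the decoding step such a system performs, here is a toy Python sketch using a linear decoder, a common baseline in brain-computer-interface research; Neuralink’s actual algorithms are unpublished, and every number below is invented.

```python
# Toy brain-computer-interface decoder: map recorded firing rates to a
# 2-D cursor velocity with a linear model. Real decoders (Kalman
# filters, neural nets) are far more sophisticated; all values made up.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 64  # simulated electrode channels

# Hypothetical decoder weights learned during a calibration session.
decoder_weights = rng.normal(size=(2, n_channels)) * 0.05

def decode_cursor_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Turn one time-bin of firing rates into an (x, y) velocity."""
    return decoder_weights @ firing_rates

# One 20 ms bin of simulated spike counts streamed from the implant.
firing_rates = rng.poisson(lam=5.0, size=n_channels)
vx, vy = decode_cursor_velocity(firing_rates)
print(f"move cursor by ({vx:+.2f}, {vy:+.2f})")
```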
Although the U.S. Food and Drug Administration granted Neuralink permission to begin human clinical trials in May, others have raised ethical concerns about the chip. Preliminary tests included implanting the chip into monkeys and teaching them to play Pong with their brains, which drew criticism from groups like the Physicians Committee for Responsible Medicine.
There are too many unanswered questions about the device, especially about what happens if things go wrong. The brain is a very sensitive organ, and implanting the device could cause injury or infection; there have been no long-term studies of the chip’s effects. The chip also blurs the lines of privacy, and it could be used to abuse or manipulate people or to gain access to sensitive information.
This device is reminiscent of Black Mirror’s Arkangel, in which a concerned mother implants a device into her daughter’s brain that monitors what the girl sees and hears and allows the mother to filter and block anything she deems stressful or graphic.
Generative A.I. deepfakes
In Joan is Awful, the in-universe streaming show casts digital likenesses of celebrities as its characters: A.I. uses their features to render them virtually, so new episodes can be released as quickly as possible.
Similarly, real-life A.I. has developed to the point where it can convincingly alter existing videos or create entirely fabricated ones of celebrities, politicians, and even regular people, portraying them saying or doing things they never did. In the political world, this ability to create deepfakes can have a serious impact on free and fair elections.
In May, Public Citizen called on the Federal Election Commission (FEC) to ban the use of generative A.I. technology in campaign ads. Unfortunately, the FEC declined to do so after a 3–3 vote on Public Citizen’s petition. Public Citizen submitted an updated petition on July 13 again calling for regulations on A.I.
On July 18, a pro-Ron DeSantis super PAC released an ad featuring an A.I.-generated voice masquerading as Donald Trump. The words were taken from one of Trump’s recent Truth Social posts, but Trump never spoke them out loud. We can count on far more egregious and deceptive political deepfakes if they remain legal.
A.I. chatbots conversing with children
The Black Mirror episode Rachel, Jack, and Ashley Too depicts an A.I. mini-robot designed to mimic a pop star, which teenager Rachel treats as her closest companion.
Snapchat, a photo-taking and messaging social media app, unveiled an A.I. chatbot in mid-April. Similar to ChatGPT, the Snapchat A.I. can answer questions and converse with users. There are even settings that allow users to name their A.I. chatbot and customize its “Bitmoji” avatar with whatever clothing, hairstyles, or looks they want.
Following its rollout, parents voiced concern that children may not understand the boundaries between machines and humans, given that the A.I. is designed to appear like a genuine friend. That design can lead to inappropriate interactions: users could lean on the chatbot to cheat on school assignments or ask it for advice better suited to a therapist. Children especially may take a chatbot’s messages too seriously, which can lead to harm or danger.
A.I. taking over real-world jobs
Increasingly in real life, technological advances are designed to make life more comfortable by taking over jobs considered undesirable or making everyday tasks quicker and easier.
Black Mirror expands on this idea in White Christmas, in which digital clones of people’s consciousness are stored in objects to carry out tasks that the real person can’t or won’t do. For example, one character’s digital clone manages her household by adjusting the thermostat throughout the day, ordering groceries, and even making toast for her real-life self.
Back in the real world, fast food chain Wendy’s partnered with Google to create a chatbot that takes orders in the drive-thru. Once the chatbot takes an order, it appears on screens inside the restaurant for the human workers to prepare. The hope is that the A.I. will help reduce wait times and lines in the drive-thru.
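A drastically simplified sketch of that order flow might look like the Python below; the menu, keyword matching, and kitchen-display handoff are invented stand-ins, not Wendy’s or Google’s actual system.

```python
# Toy drive-thru order flow: match menu items in a transcribed
# utterance and hand the order to a kitchen display. The real system
# is a full conversational A.I.; this keyword matching is invented.
MENU = {"cheeseburger": 4.99, "fries": 2.49, "frosty": 1.99}

def parse_order(utterance: str) -> list[str]:
    """Pick out known menu items mentioned in the utterance."""
    words = utterance.lower()
    return [item for item in MENU if item in words]

def send_to_kitchen_display(items: list[str]) -> None:
    """Stand-in for pushing the order to in-restaurant screens."""
    total = sum(MENU[item] for item in items)
    print(f"NEW ORDER: {', '.join(items)}  (${total:.2f})")

transcript = "Hi, can I get a cheeseburger and a frosty?"
items = parse_order(transcript)
if items:
    send_to_kitchen_display(items)
else:
    print("Handing off to a human worker.")  # the fallback described above
```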
But there are many outside factors, even in a drive-thru, that the A.I. will have to adapt to, like other people talking in the car or customers changing their minds mid-order. While there are human workers on standby to step in when issues like this occur, the whole process seems inefficient and a waste of resources.
Beyond drive-thrus, digital assistants are commonplace online. These A.I.s are starting to take on human traits, both in the way they are rendered and in their ability to mimic human emotions.
Even though human users know they are communicating with A.I., people respond instinctively to things that appear human. Because of this, people tend to interact longer with digital assistants that seem human, leading to more sales and bigger profits for the companies that own them.
Soon A.I. digital assistants could start to replace jobs held by real people. After all, digital assistants don’t have to be paid, take vacation days, or work limited hours.
Forced arbitration clauses hidden in the fine print
We’ve all been there: you’re setting up an account for a new app, and you check the box saying you agree to the terms of service without reading them. When buying a new cell phone, you skim the contract instead of reading it thoroughly. But as Joan is Awful shows, ignoring the details can have profoundly negative impacts.
When Joan goes to her lawyer to sue the streaming platform over the show that closely mimics her life, she belatedly discovers that she gave the platform permission to do this when signing up: it was hidden in the fine print.
Similarly, in many types of contracts, from credit cards to retirement accounts to smartphone apps, people are bound by forced arbitration clauses. These clauses, usually buried somewhere in the document, mean that a person gives up their right to go to court if the company harms them in some way. These clauses benefit corporations while putting consumers at a disadvantage and stripping away their rights.
Arbitration firms operate outside the state and federal judicial system and work for their paying corporate clients, not the customers who’ve been wronged. They have their own rules, procedures, fees, and filing systems. The arbitration process itself is conducted behind closed doors, with no public right to access the proceedings.
Black Mirror is a cautionary tale about the dangers of technological innovation. Though most of the tech appears positive and helpful at first, by the end of each story the risks have been revealed, and things end unhappily or tragically.
If there’s a lesson to be learned, it’s that as new technologies arise, we need laws and regulations in place to protect us from their harms.