How much of our personal information are we willing to give up for the promise of a safer online experience? We take a look as part of The Drum’s Data Deep Dive.
In basic terms, digital privacy covers the rights we have over how our personal data and information are used. Online safety refers to how that data is protected, and how it may be used when necessary. The distinction can be a little confusing, but the privacy paradox isn’t something that everyday people take lightly any more – you only have to look at the backlash that Facebook and its platforms WhatsApp and Instagram have faced.
“The idea that we ‘exchange’ data for anything misconceives the nature of data itself, because data is a collection of ones and zeroes that can easily be copied,” says Doc Searls, who runs ProjectVRM at Harvard’s Berkman Center for Internet and Society. He quotes Wired founding editor Kevin Kelly’s quip that “the internet is the world’s largest copy machine.”
“Economically speaking, data is a public good,” says Searls. “This inconveniences claims that it ought to be property and that we can exchange it for something else.”
Searls notes that the system has been broken from the start. “It presumes zero agency for individuals toward guarding their own privacy online, or to be able to assert that agency at scale across all the organizations they engage.”
He adds: “Life was no different in the natural world before we civilized it, starting with the privacy technologies we called clothing and shelter, thousands of years ago.
“Meanwhile, the digital world is only decades old, and we don’t yet have the equivalents of clothing and shelter there, beyond the choice to either stay offline completely (with encrypted storage) or to send messages in encrypted form. Both those approaches are far more useful to the wizards among us than to the rest of us muggles.”
So what are the most contentious privacy issues of our time?
Social media content moderation and censorship can be a divisive topic. Back in May, the online safety bill, which hands Ofcom the power to punish social networks that fail to remove ‘lawful but harmful’ content, was introduced in the UK. It was welcomed by many child safety organizations but condemned by civil liberties groups.
1. Facial recognition
At the beginning of November 2021, Facebook’s new parent company Meta said it would no longer use facial-recognition software to identify faces in photographs and videos after growing concerns around the technology. Users who had opted into the software were notified if a fellow user posted an image or video with them in it.
While the technology can help prevent fraud and impersonation, several complaints have been filed in recent years accusing the company of creating and storing scans of faces without permission.
“We still see facial recognition technology as a powerful tool, for example for people needing to verify their identity or to prevent fraud and impersonation. We believe facial recognition can help for products like these with privacy, transparency and control in place, so you decide if and how your face is used,” says Jerome Pesenti, vice-president of artificial intelligence at Facebook.
“But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole.”
On the challenges that social media companies face, Jim Fournier, chief executive officer and founder of Tru Social Inc, notes that there are two huge ones. “The first one is that the targeted advertising business model itself is based on tracking and profiling. This is fundamentally at odds with privacy. The second is that social media is based on a central algorithm requiring centralized moderation, which is by definition also centralized censorship.”
2. Covid-19 contact tracing apps
Technology has played a massive part in the ongoing recovery from the global pandemic, especially in the health, social and business sectors. In the UK, the NHS Covid-19 app holds details of relevant test results (for 14 days) and tells you if you’ve been in close contact with someone who has since tested positive.
Many people have queried the effectiveness of the app, and it has significant flaws – especially for more vulnerable groups, such as elderly or unhoused people, who may not have access to smartphones.
In the US, passing any privacy legislation looks unlikely. “This may not be terrible given how poorly most in Congress understand the problem. Applying existing anti-trust laws would be a good starting point,” notes Fournier.
3. Apple tool to spot images of child sexual abuse
Back in August, Apple announced that it was introducing new safety measures focused on finding child sexual abuse material (CSAM) on US customers’ devices. In a statement, the tech giant said: “Our goal is to create technology that empowers people and enriches their lives – while helping them stay safe. We want to help protect children from predators who use communication tools to recruit and exploit them.”
The development was met with mixed reviews, with some raising privacy concerns and warning that the technology could be used by authoritarian governments to spy on citizens.
In light of the concerns, Apple backtracked and on September 3 announced: “Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
Searls concludes that Apple “markets itself as uncompromising when in fact it compromises plenty.”