How the idea of privacy has changed in the US
PHOENIX (Stacker)– The Constitution includes 4,400 words, and not one of them is the word “privacy.” In an effort to contextualize the changes in American thinking about privacy in the digital age, Stacker investigated the way the idea of privacy has changed in the US during the last two decades using a variety of news and government sources.
The concept of privacy has been part of the American consciousness since the earliest days of the colonial independence movement. When the inhabitants of the 13 colonies first began organizing, they needed a way to communicate outside the watchful eye of the British Royal Post. As a result, they founded what would eventually become the US Postal Service on the principle that, unlike with the British post, no mail carrier had the right to read the letters being sent.
Since the 1700s, privacy in the US has been a hotly debated issue. Depending on the specific decision, excerpts of the First, Third, Fourth, Fifth, Ninth, and 14th amendments of the Constitution have been used to justify various levels and types of privacy. Beyond the Constitution, a variety of privacy legislation has been passed at the federal level, protecting specific types of personal information and activities.
“[American privacy legislation] does tend to be very sectoral,” said Hayley Tsukayama, the senior legislative activist for the Electronic Frontier Foundation. “[Legislators] are still really looking at data weaponization. When there’s an incident, they’re willing to react legislatively, but it’s very difficult to get them to react [with protections] for all data.”
When examining federal privacy legislation, Tsukayama’s point about sectoral regulation becomes apparent. Six laws explicitly protect various privacy rights, covering financial information collected by credit agencies, personal information collected by the government, data accessed through computer hacking, information collected online from children under 13 years old, the disclosure of information sold by financial institutions, and health information possessed by a health care provider.
Each of these laws not only specifies a narrow type of data that must remain private, but also only protects that data when it’s possessed by a particular person or organization. As technology has developed by leaps and bounds since the advent of the internet, more data is produced daily than can be fathomed, reaching an estimated 463 exabytes, or 463 billion gigabytes globally each day by 2025.
Framed by historic turning points in public conversations, explore how privacy has evolved in the collective US consciousness.
On Oct. 21, 1998, the Government Paperwork Elimination Act was passed in an effort to streamline federal government processes that require significant paperwork. This was a response both to bureaucratic backlogs and to the increased use of “electronic communications and internet usage.” Many states followed suit. What much of the American public didn’t realize until the law took full effect was that this digitization of processes also allowed online public records requests to become prevalent.
Whereas previously an individual would have been required to go in person to a federal building to request public records, they could now potentially complete that process entirely online from anywhere with internet access. This shift brought the possibility of personal information disclosures to the forefront of public conversations about privacy, although those conversations remained siloed among the segments of Americans who dealt with public records either directly or tangentially.
Before internet-enabled devices became portable, it was more difficult to collect individual data en masse. When devices like the iPhone hit American markets, it set off a bonanza of developing technological add-ons, including phone applications. With a supercomputer in everyone’s back pocket, these applications could not only function as advertised—as a game, organizational tool, information source, or something else—but also begin hoarding data about the device owner’s usage of other applications, their internet search history on the device, location throughout the day based on cell tower pings, and much more.
This surge in data collection prompted enough conversations about privacy that a group of US senators proposed the Personal Data Privacy and Security Act of 2007. Although the bill was never passed into law, it was intended to be a wide-reaching piece of legislation “to prevent and mitigate identity theft, to ensure privacy, to provide notice of security breaches, and to enhance criminal penalties, law enforcement assistance, and other protections against security breaches, fraudulent access, and misuse of personally identifiable information.”
Throughout the next decade, privacy-related stories popped up in the news and caused momentary uproars, but no substantive change. In 2009, internal documents from Wal-Mart were released, indicating it had suffered a data breach by a foreign actor in 2004 and 2005. According to reports, the hackers were trying to obtain credit card information from shoppers at brick-and-mortar stores.
Just one year later, Google announced that during the creation of its new Street View tool, it had “mistakenly collected” information sent via unencrypted WiFi networks using Street View cars. The event prompted the appointment of Alma Whitten as Google’s director of privacy. Perhaps one of the more disturbing data privacy stories during this period was not a hack or a mistake, but the intentional personalization of advertising material by Target.
Based on her buying history at Target, a Minneapolis high schooler began receiving personalized coupons in the mail for maternity and newborn items. Her father angrily complained to his local Target store, only to have his daughter announce her pregnancy soon thereafter. The idea that a company might know your most personal information without your telling it became a pervasive fear in the collective American psyche.
When the breach of credit report company and data broker Equifax occurred, much of the American public was unaware of just how much data the company collected and stored. The hackers, who were allegedly members of the Chinese military, took advantage of a small flaw in the software Equifax used to process credit report disputes. Equifax had neglected to update its security software as advised by the Apache Software Foundation.
This event not only revealed to Americans the sheer quantity of data being hoarded by Equifax, but it also highlighted the fact that the company was using that data to target customers and selling personalized data to other companies without customers’ knowledge. Massive media coverage of the Equifax scandal raged for days, and even within the last few years, experts have continued to discuss the circumstances and repercussions of the breach.
In another instance of a private entity purposefully using supposedly secure personal data to influence individuals, the political consulting firm Cambridge Analytica leveraged vast swaths of Facebook data to sway voters in favor of then-presidential candidate Donald Trump. Soon after the scandal broke, Cambridge Analytica lost massive amounts of customers and ultimately shut down entirely.
Although Cambridge Analytica no longer uses social media data to influence elections, the scandal marked the beginning of Facebook’s (now Meta’s) ongoing difficulties with data privacy and user security. People have become increasingly wary of Meta because it uses advanced and potentially biased algorithms to predict the content with which users will most likely engage.
Yet, it’s difficult to escape the conglomerate’s data-hoarding clutches—Meta owns 94 companies, including Instagram, WhatsApp, and Oculus VR.
On June 24, 2022, the US Supreme Court overturned Roe v. Wade, a nearly 50-year-old precedent that established the right to receive an abortion. The case was originally decided in 1973 on the basis of a right to privacy implied in the 14th Amendment. This justification has been used in a variety of other Supreme Court rulings, including those protecting the rights to contraception and to same-sex relationships and marriage.
Not only has the overturning of Roe v. Wade launched the issue of the right to privacy into conversations in newsrooms and American homes alike, it has also turned a spotlight on the role of technology companies and their obligation (or lack thereof) to protect user privacy—especially when it comes to reproductive health information. Though many people may cite the Health Insurance Portability and Accountability Act of 1996 as proof that companies must protect health data, HIPAA only applies to health information possessed by a health care provider. Because Google, Meta, and other tech companies do not provide health care services, they are under no legal obligation to protect data such as an individual’s search history, social media affiliations, or message data.
Moving forward, the future of privacy rights in the US is uncertain at best. At the time Roe v. Wade was originally decided, the issue of data tracking via the internet wasn’t even a plausible consideration. Whatever happens, Hayley Tsukayama said she sees hope in the growing awareness of data privacy issues. “I think that the tide has turned against online advertising. People are getting annoyed [with targeted ads] again, like in the pop-up ad era, especially on mobile devices, so I do think we are seeing a shift against data brokers.”
Copyright 2022 Stacker via Gray Media Group, Inc. All rights reserved. This article has been re-published pursuant to a CC BY-NC 4.0 License.