Consent has arguably become a cornerstone of modern privacy regimes, serving as one of the most commonly used mechanisms for legitimizing the collection and use of personal data, despite ongoing concerns about its effectiveness in truly safeguarding privacy. This paper explores the legal and historical foundations of consent within the framework of privacy law, examining its legal significance alongside its critiques and limitations.
The historical development of consent in privacy law is rooted in the evolution of the concept of privacy itself, moving from philosophical discussions about personal space to legal frameworks addressing data collection and use. With Plato introducing the concept of the “receptacle” (chōra) and Aristotle focusing on the concept of “place” (topos), we see that, although neither explicitly discussed how much space an individual needs or feels comfortable with, a groundwork for understanding the relationship between individuals and their surroundings was being laid.
Although the idea of individual consent in the context of personal information is often traced to the privacy concepts articulated in Samuel Warren and Louis Brandeis’s foundational 1890 article, The Right to Privacy, privacy as a concept can be traced back as early as 1127, with early protections stemming from common law principles related to trespass and property rights. The concept of informed consent emerged later, particularly with the rise of data collection technologies. The need for consent in relation to recording and photographing became a concern in the early 20th century, and a series of legal milestones followed: the Fourth Amendment laid a foundation for the idea that personal space, property, and information should be protected against unreasonable searches and seizures; the Freedom of Information Act (1967) allowed individuals to request information about themselves from government agencies, promoting transparency and access to personal information; the Privacy Act of 1974 established principles for how personal information is handled in certain situations; and the Children’s Online Privacy Protection Act (1998) established a parental consent requirement for the collection of personal information from children under 13.
Despite consent playing a significant role in empowering individuals to control their personal data, its effectiveness as a sole mechanism for privacy protection is heavily debated and criticized in the context of U.S. privacy frameworks. While most Americans ostensibly rely on notice-and-choice to protect their online privacy, they rarely read privacy notices and seldom truly understand the agreements they are consenting to.
What makes consent a key principle in the context of U.S. privacy frameworks is the fact that it allows organizations to collect, use, store, and share an individual’s personal data under specific valid conditions. This empowers individuals to control their own data and to ensure it is used in a way that aligns with their privacy expectations.
Valid consent is one where individuals:
• Have a genuine choice and control over their data.
• Give consent for a clearly defined purpose, voluntarily, without pressure or manipulation.
• Must be provided with sufficient information about the data processing activities.
• Are not in doubt about the agreement. Pre-ticked boxes or inactivity are not considered valid consent.
• Must be capable of understanding the implications of their decision.
• Should be able to withdraw their consent as easily as they gave it.
In essence, valid consent empowers individuals to make informed decisions about their personal data and ensures that organizations are transparent and accountable in their data processing practices. However, even if establishing the validity of one’s consent in a more comprehensive manner becomes achievable, determining the factors that shape an individual’s expectations of privacy remains a complex and nuanced challenge.
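The conditions listed above can be sketched, purely for illustration, as a simple validation check over a hypothetical consent record. All field names here are assumptions introduced for the example, not terms drawn from any statute or regulation:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical consent record; the fields mirror the validity
    conditions discussed above and are illustrative only."""
    purpose: str              # a clearly defined purpose for processing
    freely_given: bool        # no pressure, manipulation, or coercion
    informed: bool            # sufficient information was provided
    affirmative_action: bool  # an active opt-in, not a pre-ticked box
    capacity: bool            # individual can understand the implications
    withdrawable: bool        # withdrawal is as easy as granting consent

def is_valid(consent: ConsentRecord) -> bool:
    """Consent is valid only if every condition holds simultaneously."""
    return bool(consent.purpose) and all([
        consent.freely_given,
        consent.informed,
        consent.affirmative_action,
        consent.capacity,
        consent.withdrawable,
    ])

# A pre-ticked box fails the affirmative-action condition, so the
# record as a whole is invalid even if every other condition is met:
pre_ticked = ConsentRecord("marketing emails", True, True, False, True, True)
```

The design point the sketch makes is that validity is conjunctive: failing any single condition, such as an inactive pre-ticked box, invalidates the consent as a whole.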
To assess the level of informedness in informed consent, one needs to ensure that individuals have received disclosure of the relevant information, have the capacity to understand it, and can make a voluntary decision. This is a particularly challenging task to accomplish considering the:
• Complexity and length of privacy policies.
• Cognitive overload, “consent fatigue” and the “transparency paradox”.
• Difficulty understanding potential risks and consequences.
It is also important to recognize that the difficulty of anticipating and comprehending the potential consequences of granting consent, combined with the lack of immediate negative consequences for sharing data, can lead individuals to underestimate privacy risks. For a choice to be voluntary, in the context of privacy frameworks, individuals must be able to make a free and uncoerced decision about how their information is collected, used, and shared. In many online scenarios, however, individuals face: (1) a "take it or leave it" proposition, where users either accept the terms and conditions or lose access to a desired service; (2) deceptive interface designs, often referred to as "dark patterns," that steer users towards providing consent, whether intentionally or unintentionally; and (3) the privacy paradox, the discrepancy between people's stated privacy concerns and their actual behaviors, in which individuals often willingly share personal information despite expressing strong privacy preferences.
The above-listed critiques highlight the need for a re-evaluation of consent’s effectiveness as the primary safeguard for individual privacy. According to Helen Nissenbaum, what makes the current consent-based privacy policy regime meaningless is that the majority of people do not understand it, meaning they do not know what they are consenting to. Because very little thought is given to whether a data flow is good for users and society, there is a real need for creative philosophical thinking about productive data flows within various domains.
Consent mechanisms, like those for data collection, often fall short in protecting users due to several key factors:
• Information asymmetry and complexity.
• Unrealistic expectations of user behavior.
• Lack of genuine choice and power imbalances.
• Policy updates and scope creep.
• Data exploitation.
• Evolving data practices.
These factors highlight that simply obtaining consent is insufficient to adequately protect users’ privacy and interests in the digital realm. Stronger legal protections, such as data minimization principles and regulatory oversight, are arguably needed to complement or even replace the current notice-and-consent frameworks. Some experts propose models like the fiduciary model, which imposes a duty on companies to act in the best interest of users with regard to their data. While the fiduciary model may appear to offer a workable solution, it is important to acknowledge its potential shortcomings, particularly issues related to informational ambiguity, difficulties in determining a user’s best interests, and insufficient safeguards. The examples presented above illustrate that there are indeed several models and frameworks regarded as more effective than relying solely on consent to protect privacy.
The “Privacy by Design” framework, for example, emphasizes integrating privacy protections into systems and processes from the initial design phase. Its potential effectiveness lies in implementing preventative measures rather than reacting to breaches after they occur; applying the highest level of privacy protection automatically unless the user explicitly adjusts settings; reducing overall risk by collecting only essential data; and maintaining robust security measures throughout the entire data lifecycle.
The framework of “Contextual Integrity” shifts the focus from individual control to the appropriateness of information flow within specific contexts and their associated social norms. According to this view, which departs from the Fair Information Practice Principles, privacy is about maintaining the appropriate flow of information rather than restricting all information sharing.
“Privacy-Enhancing Technologies” are tools and practices designed to safeguard personal data during storage, processing, and transmission by removing or altering personal identifiers to prevent or minimize re-identification; allowing data to be processed while it remains encrypted; and enabling multiple parties to collaboratively analyze data without revealing their individual datasets.
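As a minimal sketch of the first of these techniques, removing or altering identifiers, the following shows pseudonymization of a record before analysis. The field names, key, and record contents are all hypothetical examples, not drawn from any real system; a keyed hash is used here rather than a plain hash so that re-identification requires the secret key, not merely a dictionary of likely inputs:

```python
import hmac
import hashlib

# Hypothetical secret key held by the data controller; in practice this
# would live in a key-management system, never in source code.
SECRET_KEY = b"example-rotation-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    The same input always yields the same pseudonym, so records about
    one person can still be linked for analysis without the raw value.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def strip_identifiers(record: dict) -> dict:
    """Return a minimized copy of the record: direct identifiers are
    pseudonymized and precise values are coarsened (data minimization)."""
    return {
        "user": pseudonymize(record["email"]),  # stable pseudonym per user
        "age_band": record["age"] // 10 * 10,   # coarsen exact age to a band
        "activity": record["activity"],         # keep only what analysis needs
    }

record = {"email": "alice@example.com", "age": 34, "activity": "page_view"}
safe = strip_identifiers(record)
# 'safe' contains no raw email; repeated events from the same email
# still map to the same pseudonym, so aggregate analysis remains possible.
```

Note that pseudonymization of this kind reduces, but does not eliminate, re-identification risk, which is why it is typically combined with the other PET techniques mentioned above.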
These models and frameworks offer a more comprehensive approach to privacy protection by shifting the burden from individuals to organizations and incorporating technical and design-based safeguards. They also help organizations meet legal requirements, build consumer trust, and reduce the risks associated with data breaches.
I find that my personal experience with digital consent (clicking “I agree”, managing privacy settings, or deciding to share personal data) is comparable to that of others, although, perhaps because of my legal training, I tend to approach the topic with greater acceptance. This perspective may also be influenced by my upbringing in a regime where surveillance was pervasive, and we were taught that anything done in private should be acceptable if made public.
I believe that in the digital age, the line between public and private life has become increasingly blurred. Rather than focusing solely on drafting privacy policies, we should engage in deeper conversations about the extent of our vulnerability in the digital realm - recognizing these vulnerabilities as inherent to the digital technologies we rely on, rather than solely as matters of personal responsibility.
More often than not, I click “agree” when I am interested in a product or service, without giving much thought to adjusting every available privacy setting. Still, I like to believe that I have the power to decide what personal data I share and what I withhold, although, in reality, this is very difficult to achieve.
In my view, applying a legitimate interest balancing approach may offer a meaningful and sustainable solution to the complexities of data processing across various contexts, helping to determine both the scope of consent and the role of technology.
The continued use of consent as a mere procedural formality, rather than as a genuine expression of individual choice, urgently requires rethinking. At the heart of this issue, as in many others, lies a lack of meaningful communication. Initiating the right kind of dialogue could pave the way for more effective, sustainable, and human-centered solutions.