Much has been said about freedom of expression in recent years, particularly online, where freedom of expression reigns supreme and anyone wishing to debate a topic should, in theory, do so rationally. But that's all theory. In practice, debates descend into outright name-calling and furious exchanges of words. While certainly entertaining, this does little to advance our understanding of, or opinions on, the subject under discussion.
All this commotion and nonsense seems to prevail under the garb of freedom of expression. This leads to a vital question – does freedom of expression mean no restrictions on the way we express ourselves, or is the state required to provide an environment in which we can freely express ourselves?
This conundrum has been framed as the defensive view of freedom of speech versus the empowering view of freedom of speech. The defensive approach to speech rights originates in the works of classical liberal scholars such as John Locke and John Stuart Mill, and extends to neo-liberals such as Friedrich Hayek and Robert Nozick. Liberty, on this view, is defined as the absence of coercion by government and by others. The role of the government is restricted to maintaining individuals' private spaces so that they can pursue their goals, and exercise their freedom of expression, with minimal intervention by the state or by others. These thinkers believed that decisions made by a plurality of individuals guided by free will and market forces are superior to decisions made by the government, because the "state has no legitimate reason to interfere with individual decision makers". The role of the state is relegated to ensuring that the conditions necessary for this competitive market economy are met. Coupled with their belief in the mechanisms of laissez-faire, they extended this idea to freedom of speech by drawing a clear demarcation between the state and the private sphere: "prohibiting government interference with expression and relying on the public's self-restraint in matters of non-governmental censorship could secure freedom of speech". It is argued that "freedom of speech is best served by market mechanisms that are identified with a private sphere of public opinion formation". However, this view does not take into account the ability of competitive markets themselves to coerce and curb freedom of speech; such coercion is treated as inconsequential so long as it is not perpetrated by the state.
An opposing view to the defensive approach is the empowering approach, which draws its motivation for free speech from Participatory Democratic Theory, as espoused by writers like T.H. Green, John Dewey and Benjamin Barber. They viewed the citizen and the government not as separate from each other but as theoretically coterminous, and held that it was the duty of the state not only to protect its citizens from coercion, but also to provide the conditions that would enable citizens to "collectively examine, make and enact social decisions to benefit the common good". Communication is assigned a central role in this process, acting as a facilitator of the social enquiry and mediation that "generate the political and social knowledge necessary to legitimise self-governance". The empowering approach is more pragmatic than the defensive approach in that it recognises that coercion can emanate from vested interests other than the state. The government thus plays a much more crucial role: it recognises liberty as the "opportunity to act" and develops and maintains the procedures, processes and institutions that provide this opportunity. This ensures that all communicative spaces are free from coercive forces of any kind, thereby enabling legitimate public decision-making.
Alexander Meiklejohn once said that the point of free speech is "not that everyone gets to speak but that everything worth saying gets said". While the defensive approach ensures that everyone gets to speak, it runs the risk that noteworthy points might drown in the noise. In such a scenario, the empowering approach is useful in that the onus lies on the state to provide the right social conditions for freedom of speech. However, this also means that the concept and practice of freedom of speech might become lopsided, since what counts as the public interest will be decided by a few and may not represent the best interest of the masses. What is essential, then, is to strike a balance between the two approaches. A plausible solution would be to outline the parameters of public interest with respect to freedom of speech and decide, on the merit of each parameter, which approach would suit best.
One of the prime issues that crops up frequently in data protection conferences and forums is the inherent complexity of the legalese that prevents users from understanding the Terms & Conditions* they sign up to when they join a social network, use an app or visit a website. This complexity leaves Users uninterested in reading the T&Cs at all, which has a huge impact on data protection: Users remain ignorant of how their data is being used and, more importantly, there is no clarity on who actually "owns" the data.
While it was clear that a new framework was needed to make Users aware, or at the very least get them interested enough to explore further, the prime issue was "how". In this regard, one of the speakers showed a video by The Scott Trust, which is part of The Guardian Group. Unfortunately, I am unable to find the video online at the moment (I will share the URL once I do). The video informed the viewer that all data collected on the site was used to generate relevant content for the User, without sharing any information with any third party that might be affiliated with The Guardian. It was quite well made and did a good job of summarising The Guardian's T&Cs in under 2 minutes (I think).
However, I see two problems with this approach. First, as highlighted by another speaker on the panel, most Users won't bother to watch the video or even access the page containing it. The second problem, and this is a major concern, is that a User who does watch the video is likely to mistake the oversimplified version of the Terms of Service presented there for ALL the Terms & Conditions, when in fact it's just a summary. That might be a problem.
Reasons for Complexity
There might be two reasons (readers are encouraged to suggest more) for the complexity of legal documents such as Terms of Service, EULAs, etc. –
- Organisations are trying really hard to be very clear about where they stand regarding the terms of usage of their products/services. Whether one agrees with those terms is another matter – and one that demands resolution.
- Organisations know that a lot of information can be shrouded in Terms of Service, since Users don't usually read them, and can thus create deceptive Terms of Service, EULAs, etc. Contrary to popular belief, it is usually not the big companies who engage in such practices (mostly because they know they are always under public scrutiny and cannot get away with it).
Either way, it is of supreme importance that Users be made aware of the implications of the Terms and Conditions. It is equally important for organisations to realise that it is to their benefit for Users to understand their Terms of Service, since the absence of complexity makes way for a healthier and more trustworthy relationship between the institution and the user. Case in point – when Amazon decided to remotely remove George Orwell's 1984 from customers' Kindles owing to copyright infringement issues (it turns out it had been doing the same for other books too, such as Animal Farm, Twilight, books by Ayn Rand and some books from the Harry Potter series), possibly the most exemplary comments from the list of user complaints highlighting the confusion arising out of such ambiguities came in an exchange between two users –
User#1 : “What ticked me off is that I got a refund out of the blue and my book just disappeared out of my archive. I emailed Amazon for an answer as to what was going on and they said there was a “problem” with the book, nothing more specific. I’m sorry, when you delete my private property – refund or not – without my permission, I expect a better explanation than that. And, BTW – Pirated books showing up on Amazon – not MY problem – hire more people to check them BEFORE you sell them to me. I call BS on the “sometimes publishers pull their titles” lame excuse someone else got too.
I like the B&N analogy above – but I liken it to a B&N clerk coming to my house when I’m not home, taking a book I bought from then from my bookshelf and leaving cash in its place. It’s a violation of my property and this is a perfect example of why people (rightly) hate DRM.”
User#2 (in response to User#1) : “You don’t buy a Kindle book from Amazon. You buy a license to download it. I will bet that if you read all the fine print in the terms of service, you will see that Amazon says they can remove (or rescind, or revoke, or whatever the legal term is) the license if the book in question has been put up in violation of the copyright.
If you buy something that turns out to be stolen, it can be confiscated and returned to the legal owner with no compensation to you. You could try to get your money back from the vendor, but that would be something you would have to pursue yourself; the police wouldn’t do anything about it.
Consider how many posts there have been here where people rant and rave because Amazon doesn’t do enough to help owners of lost or stolen Kindles get them back. Now there are complaints because Amazon does make the effort to get stolen (and that’s what unauthorized books are) books “returned” to the copyright holders. Talk about a no-win situation.”
Matters weren't helped when Drew Herdner, Amazon's spokesperson, gave the following statement to reporters – "We are changing our systems so that in the future we will not remove books from customers' devices in these circumstances."
The phrase "in these circumstances" only adds to the complexity, since it technically implies that Amazon may still remove books under different circumstances. Alongside intangible reputational damage, Amazon suffered real economic damages, paying the plaintiff's lawyer $150,000 and an undisclosed sum to the plaintiff and the co-plaintiff.
What's clear is that this issue (and there are other examples) arose from complexity in the terms of engagement between Amazon and its users, which led to massive financial and reputational damage for the company.
Now that the need for a clearer set of Terms & Conditions is acknowledged by Users as well as Organisations, the prime question is – HOW? In other words, how do we implement measures that remove the complexity arising from the Terms & Conditions?
The hard answer to this question is – There’s no panacea for this issue.
That being said, we can always attempt to minimise the issues arising out of such complexities by making the Terms more succinct and more attuned to Users' hopes and expectations. To that end, one solution I have in mind is what I would call a Terms of Service Commons, or ToS Commons for short. The main aim of the ToS Commons would be to create a middle ground between Organisations and Users by attempting the following –
- Develop a "generic Terms of Service" (gToS), which would encapsulate a general "philosophy" applicable to all websites. A good analogy is to think in terms of the generic features of all social media websites. For example, Twitter, Facebook and LinkedIn are all social media websites with different models and features. Yet they offer certain common functionalities, such as private messaging, referring to one's contacts by hyperlinks, sharing photographs, etc. Thus, if one were to implement a gToS, a starting point would be to understand the common features that describe all social media websites (note that I am using social media only as an example; the gToS could vary with the type of website). This can be termed a "Read Once – Apply Always" type of document.
- Anything specific to a website's business model (for example, Sponsored Tweets are specific to Twitter) can be covered by a "specific Terms of Service" (sToS).
In all of this, I believe an interesting approach would be a bottom-up one that seeks to understand the expectations of the User base. Wikipedia is a good inspiration for how one could build a huge corpus reflecting the user knowledge base. I am not saying all User recommendations need be accepted, but if a certain idea attains a "critical mass", it should be imbibed into the larger "philosophy" of the gToS.
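To make the idea concrete, here is a minimal sketch of how a ToS Commons might work mechanically: a site's effective Terms of Service is composed from the generic layer (gToS) plus a site-specific layer (sToS), and user-proposed clauses are folded into the gToS once they cross a "critical mass" of support. All names, the clause structure and the 60% threshold are purely illustrative assumptions on my part, not a proposal for a real specification.

```python
# Illustrative sketch of a ToS Commons: a gToS (generic clauses) layered
# under an sToS (site-specific clauses), with user proposals promoted into
# the gToS once support crosses a critical-mass threshold.
# The threshold and all clause names are hypothetical.

CRITICAL_MASS = 0.60  # assumed fraction of voters needed to adopt a proposal


def effective_tos(gtos: dict, stos: dict) -> dict:
    """Merge generic and specific clauses; specific clauses take precedence."""
    merged = dict(gtos)
    merged.update(stos)
    return merged


def adopt_proposals(gtos: dict, proposals: dict, total_voters: int) -> dict:
    """Fold user-proposed clauses into the gToS once support hits critical mass."""
    for clause_id, (text, votes) in proposals.items():
        if votes / total_voters >= CRITICAL_MASS:
            gtos[clause_id] = text
    return gtos


# Hypothetical generic clauses shared by all sites of this "function" category.
gtos = {"messaging": "Private messages are not shared with third parties."}

# Hypothetical clause specific to one site's business model.
stos = {"sponsored": "Sponsored posts are clearly labelled as such."}

# User proposals as (clause text, supporting votes); 100 users voted in total.
proposals = {
    "deletion": ("Users may permanently delete their data on request.", 72),
    "tracking": ("No cross-site tracking of Users.", 31),
}

gtos = adopt_proposals(gtos, proposals, total_voters=100)
tos = effective_tos(gtos, stos)
# "deletion" reached 72% support and is adopted; "tracking" (31%) is not.
```

The precedence rule in `effective_tos` mirrors the idea that the sToS only extends or overrides the gToS for model-specific matters, so a User who has read the gToS once need only skim the much shorter sToS per site.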
Of course, there are challenges in implementing this approach. A couple that I can think of are (readers are, of course, encouraged to suggest more points – which is one of the prime aims of the ToS Commons in the first place!) –
- Getting common consent among the masses on a contentious issue is quite difficult. A Standard Operating Procedure for mediation needs to be developed to facilitate smooth dispute resolution.
- One of the main aims of the gToS will be to shorten the overall Terms of Service one reads on a website. However, as I mentioned previously, the gToS itself will depend a lot on the "type" of the website. This is where it might get tricky: the categorisation of websites. Is Facebook a social media site (where people communicate with their friends) or an e-commerce site (where people buy stuff)? A possible solution would be to develop the ToS Commons along categories of "function" – that is, Messaging, Photo Sharing, E-Commerce (a very broad term that would need to be defined VERY clearly), and so on.
Last, but not least, the prime issue that arises is that of legitimacy. In other words, how is such an initiative likely to gain the trust and/or acceptance of everyone? The most plausible (although not completely perfect) answer might be the creation of a self-regulatory consortium consisting of all organisations conducting any form of interaction online, economic or otherwise. While this has its downsides (the consortium becoming lopsided in favour of the bigger players, a few organisations making rules for the rest of the world, etc.), it has one major upside: it might be the first time that a (somewhat) concrete Bill of Rights for the internet could be created and implemented, through a gToS.
As a concluding remark, I would like to point out that, while insufficient (I don't think it's ever possible to create a "sufficient" Terms of Service, since new terms will create newer issues, and so on), I have attempted to tackle this issue from an institutional perspective. There is another side to this issue: the Users'. Human beings are subject to cognitive limitations, and numerous issues arise out of them even within the existing frameworks. To address this, the ToS Commons could be extended to understand and address these limitations in Users as well, through the analysis of their feedback. In a nutshell, there is no one way to simplify the complexity of legalese, since it can be considered a "necessary evil". Any attempt to simplify it could end up oversimplifying the Terms of Service (as was the case with The Guardian video). Therefore, a balance needs to be maintained. The ToS Commons could be an interesting, and crucial, step in minimising this quandary.
*For the purposes of this article, the words “Terms & Conditions”, “Terms of Service”, “EULAs”, “Terms of Usage” will be used interchangeably. But all these terms, whether used individually or in groups, will invariably act as the umbrella term/s for all the aforementioned legal instruments.
All thoughts and opinions expressed in this post are solely my own and do not express the views or opinions of any of the organisations with which I may be associated.