Category Archives: Society

On development

The sharing economy holds much promise. If this article is to be believed, the lack of centralised electrical infrastructure (among other things) within the African continent might prove to be a blessing in disguise, one likely to usher in a "Third Industrial Revolution" that could overtake the growth trajectories of developed economies. Almost as a complement to that article, a report in the FT lamented the strain that development has placed on the power grid in London and the UK more generally. The rise of a new world order, whether through innovative modes of interaction or through greater cooperation (economic or otherwise) among underdeveloped and developing countries, combined with the simultaneous pressure on developed countries to maintain the high standards of living they have set for themselves (primarily through great economies of scale), is a subject of fascination for theorists and practitioners alike. Visuals showcasing the development (and hope) of underdeveloped and developing countries appear in bar graphs and photographs alike, such as the one below.

USAID

(Source: USAID)

While all that sounds impressive, there is a lacuna in these narratives: they do not delve into the meaning of development. Much like other big, nice-sounding words such as "human rights" or "freedom of expression", "development" encapsulates a certain positivity of spirit sans precision of meaning. Indeed, if one so much as looks up the dictionary definition of the word, one is stuck either in recursion (the Oxford Dictionary defines "development" as the "process of developing or being developed") or in the vagueness of the defining terms (Merriam-Webster defines "development" as "the act or process of growing or causing something to grow or become larger or more advanced", without clarifying larger or more advanced with respect to whom, when or what). But of course, we all have a fair idea of what development looks like. Visual culture, once again, plays a huge role in portraying a certain worldview of development. For example, the picture below is one of the many images that come to mind when thinking of development.

Harare_Skyline.jpg

(Source: Wikipedia)

Interestingly, the skyline shown in the picture is that of Harare, the capital city of Zimbabwe. At an exchange rate of Z$175 quadrillion to US$5 (at the time of writing), the country is hardly an ideal model of development. However, even if we disregard visual clues, the verbal ideas hover around something more than just economic prosperity. They also include general well-being: adequate healthcare, a low crime rate, sound public services… one gets the general idea. Indeed, these general ideas resonate in a recent speech by Helen Clark, the Administrator of the UNDP (and former PM of New Zealand), in which she professed that the organisation's role as a development actor was to "tackle poverty, improve governance and support the rule of law, prevent conflict and support recovery from it, and reduce disaster risk". But the question remains: can there be a single philosophical idea that encapsulates these different notions and perhaps gives the term a certain precision? The aim here is not so much to come up with a new definition of development as to attempt to define the philosophical underpinnings of what the term entails in our collective consciousness.

“Development” is the practice of establishing the certainty of man-made outcomes in the face of vagaries of nature.

First and foremost, it is a practice, which means it is a process (a never-ending one at that). Whether this practice makes perfect is a matter of conjecture, and I will return to it in a later section. Secondly, the aim of this practice is to tame the unpredictability, the vagaries, of nature. But this opens up more questions than it answers. Which vagaries are we talking about here? And how does one (i.e. a society or a country) achieve this certainty?

"Vagaries of nature" means more than natural calamities such as earthquakes, storms or lightning (although they too form constituents of the phrase). It also means the vagaries of human behaviour, considering that human behaviour is itself a product of nature. Indeed, when Thomas Hobbes lamented the state of human life as "solitary, poore, nasty, brutish and short" in Leviathan, he was attributing these dire outcomes to the vagaries of the nature of man. Leviathan laid the foundations for the justification of a strong sovereign by postulating that a strong government dissipates the state of nature through the enforcement of social contracts, which, while curtailing some aspects of complete individual liberty, brings about peace, stability and protection from the vagaries emanating from the nature of man. In other words, Hobbes, through Leviathan, proposes mitigating the adverse results of the vagaries of the nature of man by mediating them through a strong government. However, that raises the question: to what extent? Entire subjects of study (e.g. macroeconomics) attempt to answer this question and will continue to do so. A strong government, however, is not necessarily the only way. Some ardent proponents of technology profess the use of technology for mediation, since technology is agnostic of man's perverse desires, an idea that fields such as the Social Construction of Technology negate entirely. Various political ideologies also debate the nature of the state and the extent to which it should act as a mediator of the nature of man. Thus, there can be various means to the same end. However, I digress. The point I am trying to make is that a more developed country is more adept at mediating the adverse outcomes emanating from the vagaries of the nature of man.

As stated previously, the term "vagaries of nature" also denotes natural calamities. A more developed society is more likely to develop the means to protect its people from them. Of course, we still have a long way to go before we can save many lives from utterly devastating earthquakes or cyclones, but a more developed society is more likely to reduce the number of casualties through both ex-ante measures (such as sound infrastructural design that can somewhat resist natural calamities) and ex-post measures (such as quicker and farther-reaching medical support in the event of a calamity). It is not just in extremities, however, but in daily life too that developed societies are more likely to establish a certain consistency: for example, being able to provide consistent heating in cold areas and, likewise, consistent air conditioning in hot areas. While it may seem obvious, what matters here is not the heating or the air conditioning itself, for that is available in almost every developing society too. What is important is the consistency with which they are provided, thereby shielding residents from the vagaries of nature and lending a typical certainty to outcomes.

This further begets the question: how is this certainty achieved? As mentioned at the beginning, it is a practice, a never-ending one, for there is always something left to be desired. As the well-known adage goes, practice makes perfect, except that it is difficult to determine what perfection looks like. Everyone just has an idea or a visual image, much like the pictures shown above. Accordingly, metrics, some qualitative and some quantitative, are developed to measure success. GDP, per capita income and other such indicators are examples of metrics that attempt to convey how much perfection in development a particular society has reached. In this instance, development (rather, the idea of development) is both the result and the cause of the exercise of defining what constitutes perfection in development. In other words, it is an exercise in both understanding and creating the idea of development. Take, for example, the GDP of a country: there is no single way of measuring GDP, and much depends on what constitutes the "basket of goods" that determines the output being measured. In such a scenario, development is as much about politics and building that image as about actual numbers. It tends to create the perception of certainty in the face of the uncertainty brought about by nature.
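To make the basket-of-goods point concrete, here is a minimal sketch (all prices and weights are invented for illustration): the same set of price changes yields two quite different headline numbers depending purely on which basket is chosen.

```python
# Hypothetical price relatives (current price / base-year price) for three goods.
prices = {"food": 1.20, "fuel": 0.90, "housing": 2.50}

# Two candidate baskets for the same economy; weights sum to 1 in each.
basket_a = {"food": 0.50, "fuel": 0.30, "housing": 0.20}
basket_b = {"food": 0.20, "fuel": 0.20, "housing": 0.60}

def index(prices, weights):
    """Weighted price index: sum of price relatives times basket weights."""
    return sum(prices[good] * weights[good] for good in weights)

print(round(index(prices, basket_a), 2))  # 1.37
print(round(index(prices, basket_b), 2))  # 1.92
```

Neither number is "wrong"; the choice of basket is exactly the kind of political and definitional decision the paragraph above describes.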

However, there are instances in which certainty of outcomes is actual rather than merely perceptual. This is primarily the result of analysis and of being able to build systems that counter the vagaries of nature (usually what I call a circular system, i.e. one in which there is no absolute power and each entity in the system is accountable to another. While a perfectly circular system is impossible to implement, the more developed societies achieve certainty in the face of the vagaries of nature, especially those emanating from the nature of man, by making their systems as circular as possible. I intend to cover this complex topic in a separate post). Thus, when developed economies seek the help of complex analytic tools such as data science or statistics, it is an attempt to understand reality and to develop systems based on that analysis. However, my pet peeve with such an approach is that most top managers and politicians tend to treat the analysis as absolute without regard for the limitations of the models. As a result, creating development takes precedence over understanding development, and this may not always lead to optimal results.

An interesting example of this is the concept of implied volatility used in pricing derivatives. Economic stability, which in turn (partly) depends on the stability of the stock markets, is one of the hallmarks of a developed society. Implied volatility (henceforth, IV) is a perfect example of both creating and understanding that stability. As the debacle of Long-Term Capital Management proved, there is no certain way to predict the behaviour of derivatives, and certainly not by using the Black-Scholes model (even if you win the Nobel Prize for it!). One of the major limitations of the Black-Scholes model was the assumption that volatility is constant, something that works well in theory but fails miserably in practice. How does one go about correcting that? By reverse-engineering the model: using the current market price to back out the volatility instead. Once this is accepted as a standard, one can reasonably impose certainty on the vagaries of the outcome. However, this is not always the case. A shortcoming can lead to the development of newer metrics and systems, which might resolve the shortcoming at hand but will open up new ones. And the entire market will move to those newer systems in order to once again lend a certain certainty to the vagaries of nature. One can extrapolate this to any sphere of life. Ad infinitum. This is perhaps best put forth by the famous author E.M. Forster in his short story The Machine Stops (a fantastic must-read).
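To illustrate what "reverse engineering the model" means in practice, here is a minimal sketch (the quoted price and all parameters below are invented for illustration): instead of plugging a volatility into Black-Scholes to obtain a price, we search for the volatility at which the model price matches the observed market price.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisect on sigma until the model price matches the market price.
    (Works because the call price is monotonically increasing in sigma.)"""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# e.g. a call quoted in the market at 10.45, with S=100, K=100, T=1y, r=5%:
iv = implied_vol(10.45, S=100, K=100, T=1.0, r=0.05)
print(round(iv, 2))  # ≈ 0.20
```

The market's quoted prices thus define the volatility rather than the other way round, and the resulting "implied" volatility becomes the standardised quantity; its variation across strikes (the volatility smile) is itself evidence of the model's limitations.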

To attribute these two great developments to the Central Committee is to take a very narrow view of civilization. The Central Committee announced the developments, it is true, but they were no more the cause of them than were the kings of the imperialistic period the cause of war. Rather did they yield to some invincible pressure, which came no one knew whither, and which, when gratified, was succeeded by some new pressure equally invincible. To such a state of affairs, it is convenient to give the name of progress.

Replace the phrase “Central Committee” with any of the terms “government”, “society”, “country”, etc. and the statement still stands.

Coming back to the initial example of decentralised electrical distribution through a sharing economy: electricity is perhaps the most pertinent example of providing certainty against the vagaries of nature through practice. Electricity, the phenomenon, is itself a product of nature, and quite a fickle one at that; it can be found only under certain circumstances and/or in certain places. A mark of development, however, is the ability to provide it consistently. It is this consistency or certainty of supply, which in turn enables certainty of outcomes (transport, work, automation: pretty much everything in our lives depends on a consistent electrical supply), that is the sign of development. And this certainty is achieved not just against the vagaries of the "nature" of nature, but also against the vagaries of the nature of man: ensuring systems that do not establish monopolies, whether through centralisation or decentralisation, and thereby ensuring certainty of outcome for the masses in the face of fickle human nature.

Development is a complex topic to write about, and there are various ways to approach it. The varied examples in this post do not even begin to cover the extent to which the topic could be explored. That being said, the prime philosophy behind development, or rather behind what is generally understood as development, is not that complex, and that is what this post intends to explain. As an epilogue to this rather long post, I would like to offer a personal example, one I hope everyone has experienced: turbulence in flight. In my opinion, convenient passenger flight is perhaps the greatest invention of the past century (on a par with the invention of the internet, if not greater). A symbol of development, modern passenger flight embodies the very concept I have attempted to explain in this post: establishing certainty of outcome (being able to travel fast and safely) in the face of the vagaries of nature (as experienced through turbulence, not to mention the extremely harsh weather outside). This has been achieved through an immense amount of practice over the ages, the setting of standards (a result of analysis as well as lobbying) and an understanding of the "nature" of both nature and man.

 


The right to express

Much has been said about freedom of expression in recent years, particularly online, where freedom of expression reigns supreme and anyone wishing to debate a topic is, in theory, expected to do so rationally. In practice, debates descend into outright name-calling and furious exchanges of words. While certainly entertaining, this does little to advance our understanding of, or opinions on, the subject under discussion.

Commotion and nonsense seem to prevail in the garb of freedom of expression. This leads to the vital question: does freedom of expression mean no restrictions on the way we express ourselves, or is the state required to provide an environment in which we can freely express ourselves?

This conundrum has been labelled the defensive view versus the empowering view of freedom of speech. The defensive approach to speech rights originates in the works of classical liberal scholars such as John Locke and John Stuart Mill, through to neo-liberals such as Friedrich Hayek and Robert Nozick. Liberty, in this view, is defined as the absence of coercion by government and by others. The role of the government is restricted to maintaining individuals' private spaces so that they can pursue their goals with minimal intervention by the state or by others and exercise their freedom of expression. These thinkers believed that decisions made by a plurality of individuals guided by free will and market forces are superior to decisions made by the government, because the "state has no legitimate reason to interfere with individual decision-makers". The role of the state is relegated to ensuring that the conditions for this competitive market economy are met. They extended this idea, coupled with their belief in the mechanisms of laissez-faire, to freedom of speech: a clear demarcation between the state and the private sphere, whereby "prohibiting government interference with expression and relying on the public's self-restraint in matters of non-governmental censorship could secure freedom of speech". It is argued that "freedom of speech is best served by market mechanisms that are identified with a private sphere of public opinion formation". However, this view does not take into account the ability of competitive markets themselves to coerce and curb freedom of speech; such coercion is deemed inconsequential so long as it is not perpetrated by the state.

Opposing the defensive approach is the empowering approach, which draws its motivation for free speech from Participatory Democratic Theory, as espoused by writers such as T.H. Green, John Dewey and Benjamin Barber. They viewed the citizen and the government not as separate but as theoretically coterminous, and held that it was the duty of the state not only to protect its citizens from coercion but also to provide the conditions that would enable citizens to "collectively examine, make and enact social decisions to benefit the common good". Communication is assigned a central role in this process: it acts as a facilitator of the social enquiry and mediation that "generate the political and social knowledge necessary to legitimise self-governance". The empowering approach is more pragmatic than the defensive approach in that it recognises that coercion can emanate from vested interests other than the state. The government thus plays a much more crucial role: it recognises liberty as the "opportunity to act" and develops and maintains the procedures, processes and institutions that provide this opportunity. This ensures that all communicative spaces are free from coercive forces of any kind, thereby enabling legitimate public decision-making.

Alexander Meiklejohn once said that the purpose of free speech is "not that everyone gets to speak but that everything worth saying gets said". While the defensive approach ensures that everyone gets to speak, it runs the risk that noteworthy points drown in the noise. In such a scenario, the empowering approach is useful in that the onus lies on the state to provide the right social conditions for freedom of speech. However, this also means that the concept and practice of freedom of speech might become lopsided, since what is best for the public interest will be decided by the few and may not be representative of the best interests of the masses. What is essential is to strike a balance between the two approaches. A plausible solution would be to outline the parameters of the public interest with respect to freedom of speech and to decide, on the merit of each parameter, which approach would suit it best.

 

Simplifying legalese

One of the prime issues that crops up frequently in data protection conferences and forums is the inherent complexity of the legalese that prevents users from understanding the Terms & Conditions* they sign up to when they join a social network, use an app or visit a website. This leads to Users losing interest in understanding the T&Cs altogether, which has a huge impact on data protection policies: Users remain ignorant of how their data is being used and, more importantly, there is no clarity on who actually "owns" the data.

While it was clear that a new framework was needed to make Users aware, or at the very least to get them interested enough to explore further, the prime issue was "how". In this regard, one of the speakers showed a video by The Scott Trust, which is part of The Guardian Group. Unfortunately, I am unable to find the video online at the moment (I will share the URL once I do). The video informed the viewer that all data collected on the site was used to generate relevant content for the User, without sharing any information with third parties that might be affiliated with The Guardian. It was quite well made, and it did a good job of summarising The Guardian's T&Cs in under two minutes (I think).

However, I see two problems with this approach. The first, highlighted by another speaker on the panel, is that most Users won't bother to watch the video or even visit the page containing it. The second, which I feel is a major concern, is that a User who did watch the video is likely to mistake the oversimplified version of the Terms of Service in the video for ALL the Terms & Conditions, when in fact it is just a summary. That might be a problem.

Reasons for Complexity

There might be two reasons (readers are encouraged to suggest more) for the complexity of legal articles such as the Terms of Service, EULAs, etc. –

  1. The organisations are trying really hard to be very clear about where they stand regarding the terms of usage of their products/services. Whether one agrees with these terms is a matter of conjecture – one that demands resolution.
  2. Organisations know that a lot of information can be shrouded in Terms of Service, since Users don't usually read them, and can thus create deceptive Terms of Service, EULAs, etc. Contrary to popular belief, it is not usually the big companies who engage in such practices (mostly because they know they are always under public scrutiny and cannot get away with it).

Either way, it is of supreme importance that Users be made aware of the implications of the Terms and Conditions. It is also important for organisations to realise that it is to their benefit for Users to understand their Terms of Service, since the absence of complexity makes way for a healthier and more trustworthy relationship between institution and user. Case in point: when Amazon decided to pull George Orwell's 1984 from its shelves owing to copyright infringement issues (it turns out it had been doing the same for other books too, such as Animal Farm, Twilight, books by Ayn Rand and some books from the Harry Potter series), possibly the most exemplary of the user complaints highlighting the confusion arising out of such ambiguities is this exchange between two users –

User#1: What ticked me off is that I got a refund out of the blue and my book just disappeared out of my archive. I emailed Amazon for an answer as to what was going on and they said there was a “problem” with the book, nothing more specific. I’m sorry, when you delete my private property – refund or not – without my permission, I expect a better explanation than that. And, BTW – Pirated books showing up on Amazon – not MY problem – hire more people to check them BEFORE you sell them to me. I call BS on the “sometimes publishers pull their titles” lame excuse someone else got too.

I like the B&N analogy above – but I liken it to a B&N clerk coming to my house when I’m not home, taking a book I bought from my bookshelf and leaving cash in its place. It’s a violation of my property and this is a perfect example of why people (rightly) hate DRM.”

User#2 (in response to User#1):You don’t buy a Kindle book from Amazon. You buy a license to download it. I will bet that if you read all the fine print in the terms of service, you will see that Amazon says they can remove (or rescind, or revoke, or whatever the legal term is) the license if the book in question has been put up in violation of the copyright.

If you buy something that turns out to be stolen, it can be confiscated and returned to the legal owner with no compensation to you. You could try to get your money back from the vendor, but that would be something you would have to pursue yourself; the police wouldn’t do anything about it.

Consider how many posts there have been here where people rant and rave because Amazon doesn’t do enough to help owners of lost or stolen Kindles get them back. Now there are complaints because Amazon does make the effort to get stolen (and that’s what unauthorized books are) books “returned” to the copyright holders. Talk about a no-win situation.”

Matters were not helped when Drew Herdner, a spokesperson for Amazon, gave the following statement to reporters – "We are changing our systems so that in the future we will not remove books from customers' devices in these circumstances."

The phrase "in these circumstances" (my emphasis) further encourages complexity, since it can technically mean that Amazon may remove any book under different circumstances. Alongside intangible reputational damage, Amazon suffered real economic damage, having to pay a plaintiff's lawyer $150,000 and an undisclosed sum to the plaintiff and co-plaintiff.

What is clear is that this issue (and there are other examples) arose from complexity in the terms of engagement between Amazon and its users, which led to massive financial and reputational damage for the company.

Now that the need for a clearer set of Terms & Conditions is acknowledged by Users as well as Organisations, the prime question is – HOW? In other words, how do we implement measures to remove the complexity arising from the Terms & Conditions?

The hard answer to this question is – There’s no panacea for this issue.

That being said, we can always attempt to minimise the issues arising from such complexities by making the T&Cs more succinct and more attentive to Users' hopes and expectations. To that end, one solution I have in mind is what I would call a Terms of Service Commons, or ToS Commons for short. The main aim of the ToS Commons would be to create a middle ground between Organisations and Users by attempting the following –

  • Develop a "generic Terms of Service" (gToS) encapsulating a general "philosophy" applicable to all websites of a given type. A good analogy is the set of generic features common to all social media websites. For example, Twitter, Facebook and LinkedIn are all social media websites with different models and features, yet they offer certain common functionalities, such as private messaging, referring to one's contacts via hyperlinks and the sharing of photographs. Thus, a starting point for a gToS would be to understand the common features that describe all social media websites (note that I am using social media only as an example; the gToS would vary with the type of website). This can be termed a "Read Once – Apply Always" type of document.
  • Anything specific to a website's business model (for example, Sponsored Tweets are specific to Twitter) can be covered by a "specific Terms of Service" (sToS).
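As a purely hypothetical sketch of this layering (all clause names and wording below are invented), a site's effective Terms of Service could be composed as a generic baseline plus site-specific clauses on top:

```python
# Hypothetical gToS for the "social media" category: clause name -> plain-language term.
GTOS_SOCIAL = {
    "private_messaging": "Messages are stored securely and never sold.",
    "photo_sharing": "You retain copyright of uploaded photos.",
}

def effective_tos(gtos: dict, stos: dict) -> dict:
    """Specific terms (sToS) extend, and may override, the generic baseline (gToS)."""
    return {**gtos, **stos}

# A Twitter-like site adds one business-model-specific clause.
site_tos = effective_tos(GTOS_SOCIAL, {
    "sponsored_tweets": "Promoted content is labelled as such.",
})
print(sorted(site_tos))  # ['photo_sharing', 'private_messaging', 'sponsored_tweets']
```

A User who has read the gToS for social media once would then only need to read the short sToS for each new site, which is the "Read Once – Apply Always" idea in miniature.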

In all of this, I believe an interesting approach would be a bottom-up one, to understand the expectations of the User base. Wikipedia is a good inspiration for how one could build a huge corpus reflecting the users' knowledge base. I am not saying all User recommendations need to be accepted, but if a certain idea attains a "critical mass", it should be incorporated into the larger "philosophy" of the gToS.

Of course, there are challenges to implementing this approach. A couple that I can think of are (readers are, of course, encouraged to suggest more; that is one of the prime aims of the ToS Commons in the first place!) –

  • Getting common consent among the masses on a particularly contentious issue is quite difficult. A certain Standard Operating Procedure for mediation would need to be developed to facilitate smooth dispute resolution.
  • One of the main aims of the gToS will be to shorten the overall Terms of Service one reads on a website. However, as mentioned previously, the gToS itself will depend a lot on the "type" of website. This is where it might get tricky: the categorisation of websites. Is Facebook a social media site (where people communicate with their friends) or an e-commerce site (where people buy stuff)? A possible solution would be to develop the ToS Commons along categories of "function": messaging, photo sharing, e-commerce (a very broad term that would need to be defined VERY clearly), and so on.

Last, but not least, the prime issue that arises is that of legitimacy. In other words, how is such an initiative to gain the trust and/or acceptance of everyone? The most plausible (although not completely perfect) answer might be the creation of a self-regulatory consortium consisting of all organisations conducting any form of interaction online, economic or otherwise. While this has its downsides (the consortium becoming lopsided in favour of the bigger players, a few organisations making rules for the rest of the world, etc.), it has one major upside: it might be the first time that a (somewhat) concrete Bill of Rights for the internet could be created and implemented, through a gToS.

As a concluding remark, I would like to point out that, while insufficient (I don't think it is ever possible to create a "sufficient" Terms of Service, since new terms will create newer issues, and so on), I have attempted to tackle this issue from an institutional perspective. There is another side to it: the Users'. Human beings are subject to cognitive limitations, and numerous issues arise from them even within existing frameworks. To address this, the ToS Commons could be extended to understand and address Users' cognitive limitations as well, through the analysis of their feedback. In a nutshell, there is no single way to simplify the complexity of legalese, since it can be considered a "necessary evil". Any attempt to simplify it risks oversimplifying the Terms of Service (as with The Guardian's video). Therefore, a balance needs to be maintained. The ToS Commons could be an interesting, indeed crucial, step towards minimising this quandary.

*For the purposes of this article, the terms "Terms & Conditions", "Terms of Service", "EULA" and "Terms of Usage" are used interchangeably, each acting as an umbrella term for all of these legal instruments.

 

Active and Passive Internet of Things

Podcast

Podcast for the article. (Please note that there are some differences between the podcast and the article below, although most of the content is the same. The article explains the models created below and summarises their assumptions and limitations; the podcast deals more with the general idea of Passive and Active Data Collection in the Internet of Things.)

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”. – Mark Weiser

The Internet of Things is a step in this very direction. And like all things new and mysterious, it has its fair share of utopian and dystopian soothsayers, with an almost certain probability that neither of their deterministic predictions will fully come to fruition. What is interesting, however, is the common basis of both viewpoints: an increasing reliance on data generated by machines as opposed to humans. And this is where, I feel, there is a dire need for policy measures even before the IoT infrastructure becomes ubiquitous.

In this regard, I believe that going forward, data needs to be divided into two categories based on the mode of generation. It should be noted, however, that the focus is not on who is generating the data but on how the data is being generated. This distinction is crucial because even in an M2M communication, the root message (primarily, the original data) is created by the User.

The two types of data classification are as follows –

Active Data – Active Data is data generated with the active consent of the User, in the sense that the User consciously generates it. This can be thought of as akin to the User Generated Content on Facebook, Twitter, LinkedIn, or any other social media. While the nitty-gritty of the Terms & Conditions of these sites can be debated (i.e. the “fine print”, the opt-in/opt-out debate, etc.), it is safe to assume that Users generate most of the content consciously while actively consenting to the T&C.

Passive Data – When it comes to the Internet of Things (or indeed, as some companies like to call it, The Internet of Everything), the trend will increasingly be towards data generated by machines. However, this is not where the point of contention starts; it starts from how this data is generated. And the answer to this question is the subconscious behaviour of Users. Allow me to explain. I am quite restless by nature and take breaks from sitting in a chair every 10 – 15 minutes (imagine sitting through an entire 1-hour lecture!). This is something I do subconsciously. A normal, non-IoT-connected chair would not pick up this trait of mine. However, a chair that is wired into the larger IoT infrastructure, and with which my behavioural data is shared, can generate all sorts of insights for the third parties constantly monitoring my movements – Is he feeling uncomfortable? Are the ergonomics of the chair not optimal for this kind of User? – The insights can be varied and at times conflicting, thereby probably leading to less than optimal results. That might be a problem.
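To make the Active/Passive distinction concrete, here is a minimal sketch in Python. The record fields and device names are hypothetical illustrations, not part of any real IoT standard; the point is that the classification hinges on how the data was generated, not on which device holds it:

```python
from dataclasses import dataclass

@dataclass
class DataRecord:
    source_device: str
    payload_bits: int
    user_initiated: bool  # did the User consciously generate this data?

def classify(record: DataRecord) -> str:
    """Classify by *how* the data was generated, not by *who* holds it."""
    return "active" if record.user_initiated else "passive"

# A status update typed by the User vs. subconscious shifting in a chair.
post = DataRecord("phone", 2048, True)
fidget = DataRecord("chair_S1", 64, False)

print(classify(post))    # active
print(classify(fidget))  # passive
```

Note that a phone can also emit passive data (e.g. background location pings), which is why the flag sits on the record rather than on the device.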

I am not saying that the generation of subconscious behavioural data is necessarily bad. What will separate good usage from bad is the context in which the data is used (imagine having a heart attack in the middle of the street – one would agree that subconscious behavioural data collection would be extremely helpful in such a case!). Thus, what will be crucial from a policy perspective is weighing ex-post against ex-ante evidence, and understanding the contexts in which one should be preferred over the other.

The larger IoT infrastructure is a ‘Complex System’ in the sense that it is likely to exhibit ‘Strong Emergence’ – the development of behaviour at the system level that cannot be understood or described in terms of the component subsystems (Cave, 2011). IoT is foreseen primarily as making the world a more efficient place, with less reliance on human agency for the unessential and mundane aspects of day-to-day life, thereby allowing us to be more in control of the things that really matter to us. Whether such a vision will be implemented even close to this form, however, will depend mainly on policies that allow us to take a step back, understand the nature of the data, and cross-link it with the context in which it is generated. In this regard, the ‘strong emergence’ feature of IoT might compel policymakers to contextualise policies in an ex-post rather than an ex-ante manner, with the focus being more on principles than on rules.

Models

  1. Internet of Things and Data Collection – Active and Passive Internet of Things
  2. Internet of Things and Data Collection – Active and Passive Data under Conditions of Regulation

Model Assumptions

  1. Device_C represents those devices (or groups of devices) to which we consciously feed in data. E.g. Mobile Phones, Laptops, etc.
  2. Device_Sx (where ‘x’ is a numeric suffix) represents those devices (or groups of devices) which monitor our subconscious data. E.g. any device connected to the IoT infrastructure, such as a chair.
  3. Device_S1 and Device_S2 are assumed to be complementary to each other. This means that the User can either use Device_S1 OR Device_S2.
  4. All behavioural data has been taken for the average civilian population from the website of the Bureau of Labor Statistics.
  5. The numbers on the Y-Axis of the graphs do not mean anything in themselves since the numeric data taken is largely an assumption. However, what is important to be observed is the ratio between the amount of Active and Passive Data collected.
  6. The data generated by the User and collected by the devices is in bits.
  7. For the purpose of this model, I introduce a new unit of inferred information. I call it ‘info.’. This is NOT equal to the number of bits generated. It can be thought of as the unit of the amount of inferences or insights that can be generated from the bits of data.
  8. This model is a microcosm of the entire IoT infrastructure representing a User and a finite collection of devices with which he might interact and which might interact among themselves.
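The assumptions above can be sketched as a toy simulation in Python. All numbers here are illustrative placeholders – per assumption 5, only the ratio between Active and Passive data is meaningful – and the bits-to-‘info.’ conversion factor is purely hypothetical:

```python
import random

random.seed(0)  # reproducible illustration

# One conscious-input device and two complementary subconscious monitors
# (per assumption 3, the User uses Device_S1 OR Device_S2, never both).
BITS_PER_HOUR = {"Device_C": 4000, "Device_S1": 9000, "Device_S2": 7000}

INFO_PER_BIT = 0.001  # hypothetical 'info.' inferred per bit of data

def simulate_day(hours_conscious=3, hours_monitored=16):
    """Tally Active vs Passive bits over one day of device use."""
    totals = {"active": 0, "passive": 0}
    totals["active"] += hours_conscious * BITS_PER_HOUR["Device_C"]
    for _ in range(hours_monitored):
        # Each monitored hour, the User sits on one of the two chairs.
        device = random.choice(["Device_S1", "Device_S2"])
        totals["passive"] += BITS_PER_HOUR[device]
    return totals

totals = simulate_day()
insights = {k: v * INFO_PER_BIT for k, v in totals.items()}  # bits -> 'info.'
print(f"passive/active bit ratio: {totals['passive'] / totals['active']:.2f}")
```

Even with these arbitrary magnitudes, the sketch shows the structural point of the model: a User who consciously feeds data into Device_C for only a few hours a day is passively monitored for far longer, so Passive Data dominates the total collected.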