\chapter{The evolution  \statusgreen}\label{chap:vision}



This chapter gives background information about the motivation for and evolution of Swarm and its vision today. \ref{sec:historical_context} lays out a historical analysis of the World Wide Web, focusing on how it became the place it is today.
\ref{sec:fair-data} introduces the concept, and explains the importance of data sovereignty, collective information and a \gloss{fair data economy}. It discusses the infrastructure a self-sovereign society will need in order to be capable of collectively hosting, moving and processing data.
Finally, \ref{sec:vision} recaps the values behind the vision, spells out the requirements of the technology and lays down the design principles that guide us in manifesting Swarm.

\section{Historical context  \statusgreen}\label{sec:historical_context}
\green{}
While the Internet in general – and the \gloss{World Wide Web} (\gloss{WWW}) in particular – dramatically reduced the costs of disseminating information, putting a publisher's power at every user's fingertips, these costs are still not zero and their allocation heavily influences who gets to publish what content and who will consume it.

In order to appreciate the problems we are trying to solve, a little journey into the history of the evolution of the \gloss{World Wide Web} is useful.

\subsection{Web 1.0 \statusgreen}\label{sec:web_1}

In the times of \gloss{Web 1.0}, in order to make your content accessible to the whole world, you would typically fire up a web server or use some free or cheap web hosting space to upload your content, which could then be navigated through a set of HTML pages. If your content was unpopular, you still had to either maintain the server or pay the hosting provider to keep it accessible, but true disaster struck when, for one reason or another, it became popular (e.g.\ you got ``slashdotted''). At this point, your traffic bill skyrocketed just before either your server crashed under the load or your hosting provider throttled your bandwidth to the point of making your content essentially unavailable to the majority of your audience. If you wanted to stay popular, you had to invest in high-availability clusters connected with fat pipes; your costs grew together with your popularity, without any obvious way to cover them. There were very few practical ways to allow (let alone require) your audience to share the ensuing financial burden directly.

The common wisdom at the time was that the \gloss{ISP} would come to the rescue: in the early days of the Web revolution, bargaining about peering arrangements between ISPs involved arguments about where providers and consumers were located, and which ISP was making money off which other's network. Indeed, when there was a sharp imbalance between originators of TCP connection requests (aka SYN packets), it was customary for the originating ISP to pay the recipient ISP, which made the latter somewhat incentivised to help support those hosting popular content. In practice, however, this incentive structure usually resulted in putting a free \emph{pr0n} or \emph{warez} server in the server room to tilt back the scales of SYN packet counters. Blogs catering to a niche audience had no way of competing and were generally left out in the cold. Note, however, that back then, creator-publishers still typically owned their content.

\subsection{Web 2.0 \statusgreen}\label{sec:web_2}

The transition to \gloss{Web 2.0} changed much of that. The migration from a personal home page running on one's own server using Tim Berners-Lee's elegantly simplistic and accessible hypertext markup language toward server-side scripting using CGI gateways, Perl and the inexorable PHP caused a divergence from the beautiful idea that anyone could write and run their own website using simple tools. This set the web on a path towards a prohibitively difficult and increasingly complex stack of scripting languages and databases. Suddenly the World Wide Web wasn't a beginner-friendly place any more; at the same time, new technologies made it possible to create web applications with simple user interfaces that enabled unskilled publishers to simply POST their data to the server and divorce themselves of the responsibility of actually delivering the bits to their end users. In this way, Web 2.0 was born.

Capturing the initial maker spirit of the web, sites like MySpace and Geocities now ruled the waves. These sites offered users a piece of the internet to call their own, complete with as many scrolling text marquees, flashing pink glitter Comic Sans banners and all the ingenious XSS attacks a script kiddie could dream of. It was a web within the web, an accessible and open environment for users to start publishing their own content, increasingly without the need to learn HTML, and without rules. Platforms abounded and suddenly there was a place for everyone, a phpBB forum for any niche interest imaginable. The web became full of life and the dotcom boom showered Silicon Valley in riches.

Of course, this youthful naivete, the fabulous rainbow-coloured playground, wouldn't last. The notoriously unreliable MySpace fell victim to its open policy of allowing scripting. Users' pages became unreliable and the platform became unusable. When Facebook arrived with a clean-looking interface that worked, MySpace's time was up and people migrated in droves. The popular internet acquired a more self-important undertone, and we filed into the stark white corporate office of Facebook. But there was trouble in store. While offering this service for 'free,' Mr. Zuckerberg and others had an agenda. In return for hosting our data, we (the dumb f*cks \cite{carlson2010ims}) would have to trust him with it. Obviously, we did. For the time being, there was ostensibly no business model beyond luring in more venture finance, amassing huge user bases and dealing with the problem later. But from the start, extensive and unreadable T\&Cs gave all the rights to the content to the platforms. While in Web 1.0 it was easy to keep a backup of your website and migrate to a new host, or simply host it from home yourself, now those with controversial views had a new word to deal with: 'deplatformed'.

At the infrastructure level, this centralisation began to manifest itself in unthinkably huge data centers. Jeff Bezos evolved his book-selling business to become the richest man on Earth by catering to those unable to deal with the technical and financial hurdles of implementing increasingly complex and expensive infrastructure. At any rate, this new constellation was capable of dealing with the irregular traffic spikes that had crippled wildly successful content in the past. When others followed, soon enough huge swathes of the web came to be hosted by a handful of huge companies. Corporate acquisitions and endless streams of VC money effected more and more concentration of power. A forgotten alliance of the open source programmers who created the royalty-free Apache web server, and Google, who provided paradigm-shifting ways to organise and access the exponential proliferation of data, helped deal a crippling blow to Microsoft's attempt to force the web into a hellish, proprietary existence, forever imprisoned in Internet Explorer 6. But of course, Google eventually accepted 'parental oversight,' shelved its promise to 'do no evil,' succumbed to its very own form of megalomania and began to eat the competition. Steadily, email became Gmail, online ads became AdSense, and Google crept into every facet of daily life on the web.

On the surface, everything was rosy. Technological utopia hyper-connected the world in a way no-one could have imagined. No longer was the web just for academics and the super 1337: it made the sum of human knowledge available to anyone, and now that smartphones had become ubiquitous, it could be accessed anywhere. Wikipedia gave everyone superhuman knowledge, Google allowed us to find and access it in a moment, and Facebook gave us the ability to communicate with everyone we had ever known, for free. However, there was one problem buried just below the glittering facade. Google knew what they were doing. So did Amazon, Facebook and Microsoft. So had some punks, since 1984.

Once the behemoth platforms had all the users, the time came to cut a cheque to the investors: a business model had to be worked out. To provide value back to the shareholders, the content-providing platforms found their panacea in advertising revenue. And little else. Google probably really tried, but could not think of any alternative. Now the web started to get complicated, and distracting. Advertising appeared everywhere and the pink flashing glitter banners were back, this time pulling your attention from the content you came for to deliver you to the next user acquisition opportunity.

And as if this weren't enough, there were more horrors to come. The Web lost the last shred of its innocence when the proliferation of data became unwieldy and algorithms were introduced to 'help' us access the content that we want. Now that the platforms had all our data, they were able to analyse it to work out what we wanted to see, seemingly knowing us even better than we ever knew ourselves. Everyone would be fed their favourite snack, along with the products they would most likely buy. There was a catch to these secret algorithms and all-encompassing data-sets: they were for sale to the highest bidder. Deep-pocketed political organisations were able to target swing voters with unprecedented accuracy and efficacy. Cyberspace became a very real thing all of a sudden, just as consensus as a normality became a thing of the past. News became not only fake, but personally targeted manipulation, as often as not nudging you to act against your best interest, without you even realising it.

The desire to save on hosting costs had turned everyone into a target, a readily controllable puppet. Some deal.

At the same time, more terrifying revelations lay in wait. It turned out the egalitarian ideals that had driven the initial construction of a trustful internet were the most naive of all. In reality, the DoD had brought it to you, and now wanted it back. Edward Snowden walked out of the NSA with a virtual stack of documents no-one could have imagined -- unless, of course, you had taken the Bourne films for documentaries. It turned out that the protocols were broken, and all the warrant canaries long dead -- the world's governments had been running a surveillance dragnet on the entire world population -- incessantly storing, processing, cataloguing, indexing and providing access to the sum total of a person's online activity. It was all available at the touch of an XKeyscore button, an all-seeing, unblinking Optic Nerve determined to 'collect it all' and 'know it all' no matter who or what the context. Big Brother turned out to look like Sauron.
The gross erosion of privacy – preceded by many other, similar efforts by variously power-drunk or megalomaniac institutions and individuals across the world to track and block the packets of suppressed people, political adversaries or journalists, targeted by repressive regimes – had provided impetus for the Tor project. This unusual collaboration between the US Military, MIT and the EFF had responded by providing not only a way to obfuscate the origin of a request but also to serve up content in a protected, anonymous way. Wildly successful and a household name in some niches, it has not found much use outside them, due to a relatively high latency that results from its inherent inefficiencies.

By the time of Snowden's revelations, the web had become ubiquitous and completely integral to almost every facet of human life, but the vast majority of it was run by corporations. While reliability problems had become a thing of the past, there was a price to pay. Context-sensitive, targeted advertising models now extended their Faustian bargain to content producers, with a grin that knew there was no alternative. ``We will give you scalable hosting that will cope with any traffic your audience throws at it,'' they sang, ``but in return you must give us control over your content: we are going to track each member of your audience and collect (and own, *whistle*) as much of their personal data as we are able to. We will, of course, decide who can and who cannot see it, as is our right, no less. And we will proactively censor it, and naturally share your data with the authorities whenever prudent to protect our business.'' As a consequence, millions of small content producers created immense value for a handful of mega-corporations, getting peanuts in return: typically, free hosting and advertisement. What a deal!

Putting aside, for a moment, the resulting FUD of the Web 2.0 data and news apocalypse that we witness today, there are also a couple of technical problems with the architecture. The corporate approach has engendered a centralist maxim: all requests must now be routed through some backbone somewhere to a monolithic data-center, then passed around, processed, and finally returned -- even if simply to send a message to someone in the next room. This is client-server architecture, which also -- not an afterthought -- has at best flimsy security and was breached so often that breaches became the new normal, leaving oil-slicks of unencrypted personal data and even plaintext passwords in its wake, spread all over the web. The last nail in the coffin is the sprawl of incoherent standards and interfaces this has facilitated. Today, spaghetti-code implementations of growing complexity subdivide the web into multifarious micro-services. Even those with deep pockets find it increasingly difficult to pay the development bills, and it is now common for fledgling start-ups to drown in a feature-factory sea of quickly fatal, spiralling technical debt. A modern web application's stack is in all cases a cobbled-together Heath Robinson machine comprising so many moving parts that it is almost impossible even for a supra-nation-state corporation to maintain and develop these implementations without numerous bugs and regular security flaws -- well, except Google and Amazon, to be honest. At any rate, it is time for a reboot. In the end, it is the data describing our lives. They may already try, but they do not yet have the power to lock us into this mess.


\subsection{Peer-to-peer networks \statusgreen}\label{sec:peer_to_peer}

As the centrist Web 2.0 took over the world, the \gloss{peer-to-peer} (\gloss{P2P}) revolution was also gathering pace, quietly evolving in parallel. In fact, P2P traffic soon made up the majority of packets flowing through the pipes, quickly overtaking the above-mentioned SYN-bait servers. If anything, it proved beyond doubt that end-users, by working together to use their hitherto massively underutilised \emph{upstream bandwidth}, could provide the same kind of availability and throughput for their content as was previously only achievable with the help of big corporations and their data centers, attached as they are to the fattest pipes of the Internet's backbone. What's more, it could be realised at a fraction of the cost. Importantly, users retained much more control and freedom over their data. Eventually, this mode of data distribution proved to be remarkably resilient, even in the face of powerful and well-funded entities exerting desperate means to shut it down.

However, even the most evolved mode of \gloss{P2P} file sharing, tracker-less \gloss{BitTorrent} \cite{pouwelse2005bittorrent}, was only that: file-level sharing, which was not at all suitable for providing the kind of interactive, responsive experience that people were coming to expect from web applications on \gloss{Web 2.0}. In addition, while it became extremely popular, BitTorrent was not conceived with economics or game theory in mind, i.e.\ very much in the era before the world took note of the revolution its namesake would precipitate: that is, before anyone understood blockchains and the power of cryptocurrency and incentivisation.

\subsection{The economics of BitTorrent and its limits \statusgreen}

The genius of BitTorrent lies in its clever resource optimisation \cite{cohen2003incentives}: if many clients want to download the same content from you, give them different parts of it and, in a second phase, let them swap these parts with one another in a tit-for-tat fashion until everyone has all the parts. This way, the upstream bandwidth use of a user hosting content (the \gloss{seeder} in BitTorrent parlance) is roughly constant, no matter how many clients want to download the content simultaneously. This solves the most problematic, ingrained issue of the ancient, centralised, master-and-slave design of \gloss{HTTP}, the protocol underpinning the \gloss{World Wide Web}.

Cheating (i.e.\ feeding your peers with garbage) is discouraged by the use of hierarchical, piece-wise hashing, whereby a package offered for download is identified by a single short hash and any part of it can be cryptographically proven to be a specific part of the package without knowledge about any of the other parts, and at the cost of only a very small computational overhead. 
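The hierarchical hashing idea can be made concrete with a minimal sketch. The following Python uses SHA-256 and a plain binary Merkle tree rather than BitTorrent's actual wire formats, and all names are illustrative: the package is identified by a single root hash, and any one piece can be verified against it using only a short audit path of sibling hashes, without knowledge of the other pieces.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(pieces):
    """The single short hash identifying the whole package."""
    layer = [h(p) for p in pieces]
    while len(layer) > 1:
        if len(layer) % 2:               # duplicate last node on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def prove(pieces, index):
    """Audit path: sibling hashes from leaf to root for pieces[index]."""
    layer = [h(p) for p in pieces]
    path = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        sibling = index ^ 1
        path.append((layer[sibling], sibling < index))
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return path

def verify(piece, path, root):
    """Check one piece against the package hash; no other pieces needed."""
    node = h(piece)
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The verification cost is logarithmic in the number of pieces, which is the "very small computational overhead" referred to above: a peer feeding you garbage is caught as soon as a single piece fails to verify.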

But this beautifully simple approach has five consequential shortcomings \cite{locher2006free,piatek2007incentives}, all somewhat related.

\begin{itemize}
\item \emph{lack of economic incentives} -- 
There are no built-in incentives to seed downloaded content. In particular, one cannot exchange one's upstream bandwidth provided by seeding one's content, for downstream bandwidth required for downloading other content. Effectively, the upstream bandwidth provided by seeding content to users is not rewarded. Because as much upstream as possible can improve the experience with some online games, it can be a rational if selfish choice to switch seeding off. Add some laziness and it stays off forever.

\item \emph{initial latency} -- 
 Typically, downloads start slowly and with some delay. Clients that are further ahead in downloading have significantly more to offer to newcomers than they can get in return: the newcomers have nothing (yet) that those further ahead would want to download. The result is that BitTorrent downloads start as a trickle before turning into a full-blown torrent of bits. This peculiarity has severely limited the use of BitTorrent for interactive applications that require both fast responses and high bandwidth, even though it would otherwise constitute a brilliant solution for many games.
 
\item \emph{lack of fine-granular content addressing} -- Small \glossplural{chunk} of data can be shared only as parts of the larger file they belong to. While a chunk can be pinpointed for targeted access that leaves the rest of the file out, peers for the download can only be found by querying the \gloss{distributed hash table} (\gloss{DHT}) for the desired \emph{file}. It is not possible to look for peers at the chunk level, because the advertising of available content happens exclusively at the level of files. This leads to inefficiencies, as the same chunks of data can often appear verbatim in multiple files. So, while theoretically all peers who have a chunk could provide it, there is no way to find those peers, because only the enveloping file has a name (or rather, an announced hash) and can be sought for.

\item \emph{no incentive to keep sharing} --
Nodes are not rewarded for their sharing efforts (storage and bandwidth) once they have achieved their objective, i.e.\ retrieving all desired files from their peers.

\item \emph{no privacy or ambiguity} --
Nodes advertise exactly the content they are seeding. This makes it easy for attackers to discover the IP addresses of peers hosting content they would like to see removed, after which it is a simple step for adversaries to DDoS them, or for corporations and nation states to petition the ISP for the physical location of the connection. This has led to a grey market of VPN providers helping users circumvent such exposure. Although these services offer assurances of privacy, it is usually impossible to verify them, as their systems are typically closed source.
\end{itemize}
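The content-addressing shortcoming above can be illustrated with a toy sketch in Python. The chunk size, store and function names here are made up for illustration (they are neither BitTorrent's nor Swarm's actual parameters): when each chunk is addressed by its own hash, an identical chunk appearing in several files is stored once and is findable on its own, which file-level indexing cannot offer.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative chunk size

def chunk_addresses(data: bytes):
    """Split content into fixed-size chunks, each addressed by its own hash."""
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]

# A toy content-addressed chunk store: identical chunks deduplicate,
# and any party holding a chunk can serve it regardless of which
# file the requester is assembling.
store = {}

def upload(data: bytes):
    """Store every chunk under its content address; return the address list."""
    addresses = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        address = hashlib.sha256(chunk).hexdigest()
        store[address] = chunk       # a shared chunk is stored only once
        addresses.append(address)
    return addresses
```

Two files sharing a common chunk yield the same address for that chunk, so the store grows by only the genuinely new data; a chunk-level index built on such addresses is exactly what file-level DHT advertising rules out.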

That is to say, while spectacularly popular and very useful, BitTorrent is at the same time primitive: a genius first step. It shows how far we can get simply by sharing our upstream bandwidth, hard-drive space and tiny amounts of computing power – without proper accounting and indexing. However – surprise! – if we add just a few more emergent technologies to the mix, most importantly of course the \gloss{blockchain}, we get something that truly deserves the \gloss{Web 3.0} moniker: a decentralised, censorship-resistant device for sharing, and also for collectively creating, content, all while retaining full control over it. What's more, the cost of this is almost entirely covered by using and sharing the resources supplied by the breathtakingly powerful, underutilised super-computer (by yesteryear's standards :-) ) that you already own.

\subsection{Towards Web 3.0 \statusgreen}\label{sec:towards-web3}

% 0/ intro talk about the limitations and problems with web2 app architecture 
% 1/ why the game has changed in a post-satoshi world
% As the blockchain has brought us the ability to 
% 2/ why swarm represents a further iteration on this change and makes the whole thing usable, how it overcomes limitations of the blockchain, emphasis the VC problem, talk about making the web fun again
% 3/ some short exploration of current attempts to provide this and their potential limitations, but keep this short, unemotional and unbiased
% 4/ drum up to grand ending of how swarm will provide trustless computing save the world etc. etc.

\begin{centerverbatim}
The Times 03/
Jan/2009 Chancel
lor on brink of 
second bailout f
or banks
\end{centerverbatim}

On (or shortly after) 6:15pm, Saturday the 3rd of January 2009, the world changed forever. A mysterious cypherpunk created the first block of a chain that would come to encircle the entire world, and the genie was out of the bottle. This first step would set in motion a series of reactions that would result in an unprecedented amount of money flowing from the traditional reservoirs of fiat and physical goods into a totally new vehicle for storing and transmitting value: cryptocurrency. 'Satoshi Nakamoto' had managed to do something no-one else had been able to: he had, de facto, if at small scale, disintermediated the banks and decentralised trustless value transfer, and since that moment we are effectively back on a gold standard: everyone can now own central bank money. Money that no-one can multiply or inflate out of your pocket. What's more, everyone can now print money themselves that comes with its own central bank and electronic transmission system. It is still not well understood how much this will change our economies.

This first step was a huge one and a monumental turning point: now we had authentication and value transfer baked into the system at its very core. But as conceptually brilliant as it was, it had some minor and not so minor problems with utility. It allowed one to transmit digital value; one could even 'colour' the coins or transmit short messages like the one above that marks the fateful date of the first block. But that was it. And, regarding scale ... every transaction had to be stored on every node; sharding was not built in. Worse, protecting the digital money made it necessary for every node to process exactly the same transactions as every other node, all the time. This was the opposite of a parallelised computing cluster, and millions of times slower.

When Vitalik conceived of Ethereum, he accepted some of these limitations, but the utility of the system took a massive leap. He added the facility for Turing-complete computation via the \gloss{Ethereum Virtual Machine}, which enabled a cornucopia of applications that would run in this trustless setting. The concept was at once a dazzling paradigm shift and a consistent evolution of Bitcoin, which itself was based on a tiny virtual machine, with every single transaction really being – unbeknownst to many – a mini program. But Ethereum went all the way, and that again changed everything. The possibilities were numerous and alluring, and \gloss{Web 3.0} was born.

However, there was still a problem to overcome in fully transcending the Web 2.0 world: storing data on the blockchain was prohibitively expensive for anything but tiny amounts. Both Bitcoin and Ethereum had taken the layout of BitTorrent and run with it, complementing the architecture with the capability to transact but leaving any consideration of the storage of non-systemic data for later. Bitcoin had in fact added a second, much less secured circuit below the distribution of blocks: candidate transactions are shipped around without fanfare, as second-class citizens, literally without protocol. Ethereum went further, separating the headers from the blocks and creating a third tier that ferried the actual block data around ad hoc, as needed. Because both classes of data are essential to the operation of the system, these could be called critical design flaws. Bitcoin's maker probably did not envision a reality in which mining had become the exclusive domain of a highly specialised elite; every transactor was presumably thought to be able to mine their own transactions. Ethereum faced the even harder challenge of data availability and, presumably because it was always obvious that the problem could be addressed separately later, just ignored it for the moment.

In other news, the straightforward approach to data dissemination of \gloss{BitTorrent} had been successfully implemented for web content distribution by \gloss{ZeroNet} \cite{zeronet}. However, because of the aforementioned issues with BitTorrent, ZeroNet turned out to be unable to support the responsiveness that users of web services have come to expect.

In order to enable responsive \glossplural{distributed web application} (or \glossplural{dapp}), the \gloss{InterPlanetary File System} (\gloss{IPFS}) \cite{ipfs2014} introduced its own major improvements over BitTorrent. A stand-out feature is its highly web-compatible, URL-based retrieval scheme. In addition, the indexing of the available data (organised, like BitTorrent's, as a \gloss{DHT}) was vastly improved, making it possible to also search for a small part of any file, called a \gloss{chunk}.

There are numerous other efforts to fill the gap and provide a worthy Web 3.0 surrogate for the constellation of servers and services that a Web 2.0 developer has come to expect, offering a path to emancipation from the existing dependency on the centralised architecture that enables the data reapers. These are not insignificant roles to supplant: even the simplest web app today subsumes an incredibly large array of concepts and paradigms which have to be remapped into the \gloss{trustless} setting of Web 3.0. In many ways, this problem is proving to be even more nuanced than implementing trustless computation on the blockchain. Swarm responds with an array of carefully designed data structures which enable the application developer to recreate the concepts we have grown used to in Web 2.0, in the new setting of Web 3.0. Swarm reimagines the current offerings of the web, re-implemented on solid cryptoeconomic foundations.

Imagine a sliding scale: on the left, large file sizes, low frequency of retrieval and a more monolithic \gloss{API}; on the right, small data packets, high frequency of retrieval and a nuanced API. On this spectrum, file storage and retrieval systems like a POSIX filesystem, S3, Storj and BitTorrent live on the left-hand side. Key--value stores like LevelDB and databases like MongoDB or Postgres live on the right. To build a useful app, different modalities littered all over the scale are needed; furthermore, there must be the ability to combine data where necessary, and to ensure that only authorised parties have access to protected data. In a centrist model, these problems are easy to handle initially and become more difficult with growth, but every range of the scale has a solution from one piece of specialised software or another. In the decentralised model, however, all bets are off: authorisation must be handled with cryptography, and the combination of data is limited by this. As a result, in the nascent, evolving Web 3.0 stack of today, many solutions deal piecemeal with only part of this spectrum of requirements. In this book, you will learn how Swarm spans the entire spectrum, as well as providing high-level tools for the new guard of Web 3.0 developers. The hope is that, from an infrastructure perspective, working on Web 3.0 will feel like the halcyon days of Web 1.0, while delivering unprecedented levels of agency, availability, security and privacy.

To respond to the need for privacy to be baked in at the root level of file-sharing – as it is so effectively in Ethereum – Swarm enforces anonymity at an equally fundamental and absolute level. Lessons from Web 2.0 have taught us that trust should be given carefully and only to those who deserve it and will treat it with respect. Data is toxic \cite{schneier2019Jul}, and we must treat it carefully in order to be responsible to ourselves and to those for whom we take responsibility. We will explain later how Swarm provides complete and fundamental user privacy.

And of course, to fully transition to a Web 3.0-decentralised world, we also deal with the dimensions of incentives and trust, which are traditionally 'solved' by handing over responsibility to the (often untrustworthy) centralised gatekeeper. As we have noted, this is one problem that BitTorrent also struggled to solve, and that it responded to with a plethora of seed ratios and private (i.e., centralised) trackers.

The problem of the lacking incentive to reliably host and store content is apparent in various projects such as ZeroNet or MaidSafe. Incentivisation for distributed document storage is still a relatively new research field, especially in the context of blockchain technology. The Tor network has seen suggestions \cite{jansen2014onions,ghoshetal2014tor}, but these schemes are mainly academic; they are not built into the heart of the underlying system. Bitcoin has been repurposed to drive other systems like Permacoin \cite{miller2014permacoin}, while some have created their own blockchains, such as Sia \cite{vorick2014sia} or Filecoin \cite{filecoin2014} for \gloss{IPFS}. BitTorrent is currently testing the waters of blockchain incentivisation with its own token \cite{tron2018,bittorrent2019}. However, even with all of these approaches combined, there would still be many hurdles to overcome in providing for the specific requirements of a Web 3.0 dapp developer.

We will see later how Swarm provides a full suite of incentivisation measures, as well as other checks and balances to ensure that nodes are working to benefit the whole of the ... swarm. This includes the option to rent out large amounts of disk space to those willing to pay for it – irrespective of the popularity of their content – while ensuring that there is also a way to deploy your interactive dynamic content to be stored in the cloud, a feature we call \gloss{upload and disappear}.

The objective of any incentive system for \gloss{peer-to-peer} content distribution is to encourage cooperative behavior and discourage \gloss{freeriding}: the uncompensated depletion of limited resources. The \gloss{incentive strategy} outlined here aspires to satisfy the following constraints:

\begin{itemize}
    \item It is in the node's own interest, regardless of whether other nodes follow it.
    \item It makes it expensive to deplete the resources of other nodes.
    \item It does not impose unreasonable overhead.
    \item It plays nice with "naive" nodes.
    \item It rewards those that play nice, including those following this strategy.
\end{itemize}

In the context of Swarm, storage and bandwidth are the two most important limited resources and this is reflected in our incentives scheme. The incentives for bandwidth use are designed to achieve speedy and reliable data provision, while the storage incentives are designed to ensure long term data preservation. In this way, we ensure that all requirements of web application development are provided for – and that incentives are aligned so that each individual node's actions benefit not only itself, but the whole of the network. 

\section{Fair data economy  \statusgreen}\label{sec:fair-data}
\green{}

In the era of \gloss{Web 3.0}, the Internet is no longer just a niche where geeks play, but has become a fundamental conduit of value creation and a huge share of overall economic activity. Yet the data economy in its current state is far from fair: the distribution of the spoils is under the control of those who control the data – mostly companies keeping the data to themselves in isolated \glossplural{data silo}. To achieve the goal of a \gloss{fair data economy}, many social, legal and technological issues will have to be tackled. We will now describe some of these issues as they currently stand and how they will be addressed by Swarm.

\subsection{The current state of the data economy  \statusgreen} \label{sec:dataeconomy}

Digital mirror worlds already exist, virtual expanses that contain shadows of physical things and consist of unimaginably large amounts of data \cite{MirrorWorlds2020Feb}. Yet more and more data will continue to be synced to these parallel worlds, requiring new infrastructure and markets, and creating new business opportunities. Only relatively crude measures exist for measuring the size of the data economy as a whole, but for the USA, one figure puts the financial value of data (with related software and intellectual property) at \$1.4trn-2trn in 2019 \cite{MirrorWorlds2020Feb}. The EU Commission projects the figures for the data economy in the EU27 for 2025 at €829bln (up from €301bln in 2018) \cite{EUDataStrategy2020Feb}. 

Despite this huge amount of value, the asymmetric distribution of the wealth generated by the existing data economy has been put forward as a major humanitarian issue \cite{TheWinner2020Feb}. As efficiency and productivity continue to rise as a result of better data, the resulting profits will need to be distributed. Today, the spoils are distributed unequally: the larger a company's data set, the more it can learn from the data, attracting more users and hence even more data. Currently, this is most apparent with the dominant large tech companies such as \gloss{FAANG}, but it is predicted that data will also become increasingly important in non-tech sectors, even for nation states. Hence, companies are racing to become dominant in particular sectors, and the countries hosting these platforms will gain an advantage. As Africa and Latin America host so few of them, they risk becoming exporters of raw data that then pay other countries to import the intelligence derived from it, as the United Nations Conference on Trade and Development has warned \cite{TheWinner2020Feb}. Another problem is that if a large company monopolises a particular data market, it could also become the sole purchaser of data - maintaining complete control over prices and raising the possibility that the "wages" paid for providing data could be manipulated to keep them artificially low. In many ways, we are already seeing evidence of this.

% move this?
% As a solution, citizens could organise into "data co-operatives", who would then act as trade unions do in conventional economy. 

Flows of data are becoming increasingly blocked and filtered by governments, using the familiar reasoning based on the protection of citizens, sovereignty and the national economy \cite{VirtualNationalism2020Feb}. Leaks publicised by several security experts have suggested that for governments to give proper consideration to national security, data should be kept close to home and not left to reside in other countries. GDPR is one such instance of a "digital border" that has been erected -- data may leave the EU only if appropriate safeguards are in place. Other countries, such as India, Russia and China, have implemented their own geographic limitations on data. The EU Commission has pledged to closely monitor the policies of these countries and to address any limits or restrictions on data flows in trade negotiations and through actions in the World Trade Organization \cite{EUWhitePaperAI2020Feb}.

Despite this growing interest in the ebb and flow of data, the big tech corporations maintain a firm grip on much of it, and the devil is in the details. Swarm's privacy-first model requires that no personal data be divulged to any third parties, everything is end-to-end encrypted out of the box, ending the ability of service providers to aggregate and leverage giant datasets. The outcome of this is that instead of being concentrated at the service provider, control of the data remains decentralised and with the individual to which it pertains. And as a result, so does the power. Expect bad press.

\subsection{The current state and issues of data sovereignty \statusgreen }\label{sec:data-sovereignty}

As a consequence of the Faustian bargain described above, the current model of the \gloss{World Wide Web} is flawed in many ways. As a largely unforeseen consequence of economies of scale in infrastructure provision, as well as network effects in social media, platforms became massive data silos where vast amounts of user data pass through, and are held on, servers that belong to single organisations. This 'side-effect' of the centralised data model has given large private corporations the opportunity to collect, aggregate and analyse user data, positioning their data siphons right at the central bottleneck: the cloud servers where everything meets. This is exactly what David Chaum predicted in 1984, kicking off the Cypherpunk movement by which Swarm is inspired.

The continued trend of replacing human-mediated interactions with computer-mediated ones, combined with the rise of social media and the smartphone, has resulted in more and more information about our personal and social lives becoming readily accessible to the companies provisioning the data flow. This has given rise to lucrative data markets where user demographics are linked with underlying behaviour in order to understand you better than you understand yourself. A treasure trove for marketeers.

Data companies have meanwhile evolved their business models towards capitalising on the sale of the data rather than the service they initially provided. Their primary source of revenue is now selling the results of user profiling to advertisers, marketeers and others who seek to 'nudge' members of the public. The circle is closed by showing such advertisements to users on the same platforms and measuring their reactions, thus creating a feedback loop. A whole new industry has grown out of this torrent of information and, as a result, sophisticated systems have emerged that predict, guide and influence users' behaviour, enticing them to allocate their attention and their money, openly and knowingly exploiting human weaknesses in responding to stimuli and often resorting to highly developed, calculated psychological manipulation. The reality is, undisputedly, one of mass manipulation in the name of commerce, where not even the most aware can truly exercise their freedom of choice and preserve their intrinsic autonomy of preference regarding content consumption or purchasing habits.

The fact that business revenue comes from the demand for micro-targeted users to present adverts to is also reflected in the quality of service. The needs of content users – who used to be, and should continue to be, the 'end' users – became secondary to the needs of the "real" customers: the advertisers, often leading to ever poorer user experience and quality of service. This is especially painful in the case of social platforms, where the inertia caused by the network effect essentially constitutes user lock-in. It is imperative to correct these misaligned incentives: in other words, to provide the same services to users, but without the unfortunate incentives that result from the centralised data model.

The lack of control over one's data has serious consequences for the economic potential of users. Some refer to this situation, somewhat hysterically, as \gloss{data slavery}. But they are technically correct: our digital twins are held captive by corporations and put to work for their benefit, without us having any agency; on the contrary, they are used to manipulate us and to leave us less well informed and less free.

The current system of keeping data in disconnected data sets, then, has various drawbacks:

\begin{itemize}
    \item \emph{unequal opportunity} - Centralised entities increase inequality as their systems siphon away a disproportionate amount of profit from the actual creators of the value.
    \item \emph{lack of fault tolerance} - They are a single point of failure in terms of technical infrastructure, notably security.
    \item \emph{corruption} - The concentration of decision making power constitutes an easier target for social engineering, political pressure and institutionalised corruption.
    \item \emph{single attack target} - The concentration of large amounts of data under the same security system attracts attacks, as it increases the potential reward for hackers.
    \item \emph{lack of service continuity guarantees} - Service continuity is in the hands of the organisation, and is only weakly incentivised by reputation. This introduces the risk of inadvertent termination of the service due to bankruptcy, or regulatory or legal action.
    \item \emph{censorship} - Centralised control of data access allows for, and in most cases eventually leads to, decreased freedom of expression.
    \item \emph{surveillance} - Data flowing through centrally owned infrastructure offers perfect access to traffic analysis and other methods of monitoring.
    \item \emph{manipulation} - Monopolisation of the display layer allows the data harvesters to have the power to manipulate opinions by choosing which data is presented, in what order and when, calling into question the sovereignty of individual decision making.
\end{itemize}


\subsection{Towards self-sovereign data \statusgreen} \label{sec:selfsovereigndata}

We believe that decentralisation is a major game-changer, which by itself solves a lot of the problems listed above.

We argue that blockchain technology is the final missing piece in the puzzle needed to realise the cypherpunk ideal of a truly self-sovereign Internet. As Eric Hughes argued as early as 1993 in the \emph{Cypherpunk Manifesto} \cite{hughes1993}, "We must come together and create systems which allow anonymous transactions." One of the goals of this book is to demonstrate how decentralised consensus and peer-to-peer network technologies can be combined to form a rock-solid base-layer infrastructure. This foundation is not only resilient, fault tolerant and scalable, but also egalitarian and economically sustainable, with a well-designed system of incentives. Thanks to a low barrier of entry for participants, the adaptivity of these incentives ensures that prices automatically converge to the marginal cost. On top of this comes Swarm's strong value proposition in the domain of privacy and security.

Swarm is a \gloss{Web 3.0} stack that is decentralised, incentivised, and secured. In particular, the platform offers participants solutions for data storage, transfer, access, and authentication. These data services are increasingly essential for economic interactions. By providing universal access to these services for all, with strong privacy guarantees and without borders or external restrictions, Swarm fosters the spirit of global voluntarism and represents the \emph{infrastructure for a self-sovereign digital society}.

\subsection{Artificial intelligence and self-sovereign data \statusgreen} \label{sec:AIdata}

Artificial Intelligence (AI) promises to bring about major changes to our society. On the one hand, it is envisioned to open up a myriad of business opportunities, while on the other hand it is expected to displace many professions and jobs rather than merely augment them \cite{Lee2018Sep}.

The three "ingredients" needed for the prevalent type of AI today, machine learning (ML), are: computing power, models and data. Today, computing power is readily available and specialized hardware is being developed to further facilitate processing. An extensive headhunt for AI talent has been taking place for more than a decade and companies have  managed to monopolise workers in possession of the specialised talents needed to work on the task to provide the models and analysis. However, the dirty secret of today's AI, and deep learning, is that the algorithms, the 'intelligent math' is already commoditised. It is Open Source and not what Google or Palantir make their money with. The true 'magic trick' to unleash their superior powers is to get access to the largest possible sets of data.

These happen to be held by the organisations that have been systematically gathering data in silos, often by providing users with an application offering some utility, such as search or social media, and then stockpiling the data for later uses very different from those imagined by the 'users', without their express consent or even their knowledge. This monopoly on data has allowed multinational companies to make unprecedented profits, with only feeble motions to share the financial proceeds with the people whose data they have sold. Potentially much worse, though, the data they hoard is prevented from fulfilling its potentially transformative value, not only for individuals but for society as a whole.

Perhaps it is no coincidence, therefore, that the major data and AI "superpowers" are emerging in the form of the governments of the USA and China and the companies that are based there. In full view of the citizens of the world, an AI arms-race is unfolding with almost all other countries being left behind as "data colonies" \cite{HarariDavos2020Mar}. There are warnings that as it currently stands, China and the United States will inevitably accumulate an insurmountable advantage as AI superpowers \cite{Lee2018Sep}.

It doesn't have to be so. In fact, it likely won't be, because the status quo is a bad deal for billions of people. Decentralised technologies and cryptography are the way to allow for privacy of data while at the same time enabling a fair data economy to gradually emerge – one that presents all of the advantages of the current centralised data economy, but without its pernicious drawbacks. This is the change that many consumer and tech organisations across the globe are working towards, supporting the push back against the big data behemoths, with more and more users beginning to realise that they have been swindled into giving away their data. Swarm will provide the infrastructure to facilitate this liberation.

Self-sovereign storage may well be the only way for individuals to take back control of their data and privacy, as the first step towards reclaiming their autonomy, stepping out of the filter bubble and reconnecting with their own culture. Swarm represents, at its core, solutions for many of the problems of today's Internet and of the distribution and storage of data. It is built for privacy from the ground up, with intricate encryption of data and completely secure, leak-proof communication. Furthermore, it enables the sharing of selected data with third parties, on the terms of the individual. Payments and incentives are integral parts of Swarm, making financial compensation in return for granular sharing of data a core concern.

As Hughes wrote, "privacy in an open society requires anonymous transaction systems. ... An anonymous transaction system is not a secret transaction system. An anonymous system empowers individuals to reveal their identity when desired and only when desired; this is the essence of privacy."

Using Swarm will allow users to leverage a fuller set of data and to create better services, while still having the option of contributing data to the global good with self-verifiable anonymisation. The best of all worlds.

This new, wider availability of data – e.g. for young academics and startups with disruptive ideas working in the AI and big-data sectors – would greatly facilitate the evolution of a field that has so much to contribute to science, healthcare, the eradication of poverty, environmental protection and disaster prevention, to name a few, but which is currently at an impasse, despite its eye-catching successes for robber barons and rogue states. With the facilities that Swarm provides, a new set of options will open up for companies and service providers, different but no less powerful. With widespread decentralisation of data, we can collectively own the extremely large and valuable data sets that are needed to build state-of-the-art AI models. The portability of this data, a trend already hinted at in traditional tech, will enable competition and – as before – personalised services for individuals. But the playing field will be levelled for all, driving innovation worthy of the year 2020.



\subsection{Collective information \statusgreen}\label{sec:collective_information}

\glossupper{collective information} began to accumulate from the first emergence of the Internet, yet the concept has only recently become recognised and discussed under a variety of headings such as \emph{open source}, \emph{fair data} or \emph{information commons}.

A collective, as defined by Wikipedia (itself an example of "collective information"), is:
\begin{displayquote}
"A group of entities that share or are motivated by at least one common issue or interest, or work together to achieve a common objective." 
\end{displayquote}
The Internet allows collectives to form on a previously unthinkable scale, despite differences in geographic location, political conviction, social status, wealth, even general freedom, and other demographics. Data produced by these collectives through joint interaction on public forums, reviews, votes, repositories, articles and polls is a form of collective information – as is the metadata that emerges from the traces of these interactions. Since most of these interactions are today facilitated by for-profit entities running centralised servers, the collective information ends up stored in data silos owned by commercial entities, the majority concentrated in the hands of a few big technology companies. And while the actual work products are often in the open, as the offering of these providers, the metadata – which can often be the more valuable, powerful and dangerous representation of the contributors' interactions – is usually held and monetised secretly.

These "platform economies" have already become essential and are only becoming ever more important in a digitising society. We are, however, seeing that the information the commercial players acquire over their users is increasingly being used against the very users' best interests. To put it mildly, this calls into question whether these corporations are capable of bearing the ethical responsibility that comes with the power of keeping our \gloss{collective information}.

While many state actors are trying to obtain unfettered access to the collective mass of personal data of individuals, with some countries demanding magic key-like back-door access, there are exceptions. Since AI has the potential for misuse and ethically questionable use, a number of countries have started 'ethics' initiatives, regulations and certifications for AI use, for example the German Data Ethics Commission or Denmark's Data Ethics Seal. 

Yet even if corporations could be made to act in a more trustworthy manner, as would be appropriate in light of their great responsibility, the mere existence of \glossplural{data silo} stifles innovation. The basic shape of the client-server architecture itself has led to this problem, as it made centralised data storage (on the 'servers' in their 'farms') the default (see \ref{sec:web_1} and \ref{sec:web_2}). Effective peer-to-peer networks such as Swarm (\ref{sec:peer_to_peer}) now make it possible to alter the very topology of this architecture, thus enabling the collective ownership of collective information.


\section{The vision  \statusorange}\label{sec:vision}

\wip{Gregor: still working on text}

\begin{displayquote}
Swarm is infrastructure for a self-sovereign society. 
\end{displayquote}


\subsection{Values \statusorange}\label{sec:values}

Self-sovereignty implies freedom. If we break it down, this implies the following metavalues: 

\begin{itemize}
\item \emph{inclusivity} - Public and permissionless participation.  
\item \emph{integrity} - Privacy, provable provenance. 
\item \emph{incentivisation} - Alignment of interest of node and network.
\item \emph{impartiality} -  Content and value neutrality.  
\end{itemize}

These metavalues can be thought of as systemic qualities which contribute to empowering individuals and collectives to gain self-sovereignty.

Inclusivity entails that we aspire to include the underprivileged in the data economy, and to lower the barrier of entry for defining complex data flows and building decentralised applications. Swarm is a network with open participation: both in providing services and in permissionless access to publishing, sharing, and investing your data.

Users are free to express their intentions as 'actions' and have full authority to decide whether they want to remain anonymous or share their interactions and preferences. The integrity of the online persona must be preserved.

Economic incentives serve to ensure that participants' behaviour aligns with the desired emergent behaviour of the network (see \ref{sec:incentivisation}).

Impartiality ensures content neutrality and prevents gate-keeping. It also reaffirms that the other three values are not only necessary but sufficient: it rules out values that would treat any particular group as privileged or express preference for particular content or data from any particular source. 

\subsection{Design principles \statusorange}\label{sec:design-principles}
 

The Information Society and its data economy bring about an age where online transactions and big data are essential to everyday life and thus the supporting technology is critical infrastructure. It is imperative therefore, that this base layer infrastructure be \emph{future proof}, i.e., provided with strong guarantees for continuity. 

Continuity is achieved by the following generic requirements expressed as \emph{systemic properties}:

\begin{itemize}
\item \emph{stable} - The specifications and software implementations must be stable and resilient to changes in participation or politics (political pressure, censorship).
\item \emph{scalable} - The solution must be able to accommodate many orders of magnitude more users and data than it starts out with, without prohibitive reductions in performance or reliability during mass adoption.
\item \emph{secure} - The solution must be resistant to deliberate attacks, immune to social pressure and politics, and tolerant of faults in its technological dependencies (e.g. blockchain, programming languages).
\item \emph{self-sustaining} - The solution must run by itself as an autonomous system, depending neither on human or organisational coordination of collective action, nor on any legal entity's business, nor on exclusive know-how, hardware or network infrastructure.
\end{itemize}




\subsection{Objectives \statusyellow}\label{sec:objectives}

%\subsubsection{Scope}

When we talk about the 'flow of data,' a core aspect of this is how information has provable integrity across modalities, see table \ref{tab:scope}. This corresponds to the original Ethereum vision of the \gloss{world computer},  constituting the trust-less (i.e. fully trustable) fabric of the coming datacene: a global infrastructure that supports data storage, transfer and processing.

\begin{table}[htb]
\centering
\begin{tabular}{c|c|c}
dimension & model & project area\\\hline
%
time & memory & storage \\
space & messaging & communication \\
symbolic & manipulation & processing \\
\end{tabular}
\caption{Swarm's scope and data integrity aspects across 3 dimensions.}
\label{tab:scope}
\end{table}

 
With the Ethereum blockchain as the CPU of the world computer, Swarm is best thought of as its "hard disk". Of course, this model belies the complex nature of Swarm, which is capable of much more than simple storage, as we will discuss.

The Swarm project sets out to bring this vision to completion and build the world computer's storage and communication. 

\subsection{Impact areas \statusorange}

In what follows, we try to identify feature areas of the product that best express or facilitate the values discussed above. 

Inclusivity in terms of permissionless participation is best guaranteed by a decentralised peer-to-peer network.  
Allowing nodes to provide service and get paid for doing so will offer a zero-cash entry to the ecosystem: new users without currency can serve other nodes until they accumulate enough currency to use services themselves. A decentralised network providing distributed storage without gatekeepers is also inclusive and impartial in that it allows content creators who risk being deplatformed by repressive  authorities to publish without their right to free speech being censored. 

The system of economic incentives built into the protocols works best if it tracks the actions that incur costs in the context of peer-to-peer interactions: bandwidth sharing, as evidenced in message relaying, is one such action where immediate accounting is possible, since a node receives a message that is valuable to it. On the other hand, promissory services such as the commitment to preserve data over time must be rewarded only upon verification. In order to avoid the \gloss{tragedy of commons} problem, such promissory commitments should be guarded by enforcing individual accountability through the threat of punitive measures, i.e. by allowing staked insurers.
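The immediate, pairwise accounting of bandwidth described above can be sketched in a few lines of code. The following is an illustrative toy only, not Swarm's actual accounting protocol: the class and parameter names (\texttt{PeerLedger}, \texttt{payment\_threshold}, \texttt{disconnect\_threshold}) are hypothetical, chosen to show the general mechanism of keeping a running balance per peer, settling debt once it crosses a payment threshold, and refusing service to freeriders whose debt grows too large.

\begin{verbatim}
# Hypothetical sketch of per-peer bandwidth accounting.
# All names and thresholds are illustrative, not Swarm's real protocol.

class PeerLedger:
    """Net balance of service between the local node and one peer.

    Positive balance: the peer owes us. Negative: we owe the peer.
    When our debt reaches `payment_threshold` we settle; when a
    peer's debt exceeds `disconnect_threshold` we stop serving it.
    """

    def __init__(self, payment_threshold=1000, disconnect_threshold=1500):
        self.balance = 0
        self.payment_threshold = payment_threshold
        self.disconnect_threshold = disconnect_threshold

    def served_to_peer(self, cost):
        # We relayed a message the peer found valuable: it owes us more.
        self.balance += cost
        # Return False once the peer's unpaid debt is too large,
        # i.e. refuse further service to freeriders.
        return self.balance <= self.disconnect_threshold

    def received_from_peer(self, cost):
        # The peer relayed a message for us: we owe it more.
        self.balance -= cost
        if -self.balance >= self.payment_threshold:
            self.settle()

    def settle(self):
        # A real system would issue a signed, verifiable payment here;
        # this sketch simply clears the balance.
        self.balance = 0
\end{verbatim}

Note how the accounting is symmetric and instantaneous: each relayed message adjusts the balance at the moment of delivery, which is exactly why bandwidth, unlike promissory storage commitments, needs no later verification step.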

Integrity is served by easy provability of authenticity, while maintaining anonymity.
Provable inclusion and uniqueness are fundamental to allowing trustless data transformations.

% \subsection{Requirements \statusred}\label{sec:requirements}

\subsection{The future} \label{sec:future}

The future is unknown and many challenges lie ahead for humanity. What is certain in today's digital society is that to be sovereign and in control of our destinies, nations and individuals alike must retain access and control over their data and communication.

Swarm's vision and objectives stem from the decentralised tech community and its values, since it was originally designed to be the file storage component of the trinity that would form the world computer: Ethereum, Whisper and Swarm.

It offers the responsiveness required by dapps running on users' devices, and well-incentivised storage using any kind of storage infrastructure - from smartphones to high-availability clusters. Continuity will be guaranteed by well-designed incentives for bandwidth and storage.

Content creators will receive fair compensation for the content they offer, and content consumers will pay for it. By removing the middlemen providers that currently benefit from the network effects, the benefits of those network effects will be spread throughout the network.

But it will be much more than that. Every individual and every device leaves a trail of data. That data is picked up and stored in silos, its potential used only in part and to the benefit of large players.

Swarm will be the go-to place for digital mirror worlds. Individuals, societies and nations will have a cloud storage solution that does not depend on any single large provider.

% This is especially important for countries currently lagging behind, such as Africa and Latin America. 

Individuals will be able to fully control their own data. They will no longer need to take part in data slavery, giving away their data in exchange for services. Not only that, they will be able to organise into data collectives or data co-operatives - sharing certain kinds of data as commons to reach common goals.

Nations will establish self-sovereign Swarm clouds as data spaces catering to the emerging artificial intelligence industry - in industry, health, mobility and other sectors. The cloud will be between peers, though perhaps within an exclusive region, and third parties will not be able to interfere in the flow of data and communication - to monitor, censor, or manipulate it. Yet any party with the proper permissions will be able to access it, thus hopefully levelling the playing field for AI and the services based on it.

Swarm can, paradoxically, be the "central" place to store data, enabling robustness of accessibility, control, fair distribution of value, and the leveraging of data to benefit individuals and society.

Swarm will become ubiquitous in the future society, transparently and securely serving the data of individuals and devices to data consumers in the fair data economy.

