I wanted the book to be less about what happens in Silicon Valley and more about what happens in our lives. In particular, I wanted it to be about what happens in the lives of people in India and Cambodia and Turkey and Brazil. Often our policy debates about Facebook are all about Trump and the United States and how the Russians invaded and infected our elections. But that’s not the beginning of the story. What the United States suffered in 2016 is nothing compared to what Estonia has had to put up with for the past five years, for example.
How is Facebook eroding democracy in other countries? Let’s start with India. If you look at what Prime Minister Narendra Modi did with Facebook to promote his campaign in 2014, he was building on a long career of using social media. His success taught others of his ilk—like Rodrigo Duterte in the Philippines—that Facebook was a powerful tool for propaganda. Modi had come up with a playbook.
And for them, the law of repeat business reveals its ugly side. “None of this litigation happens in this industry, because nobody wants to be blackballed,” my anonymous lawyer says. Or, as an angel investor puts it, it’s important that even a failed venture “facilitates the founder’s story.” Something similar seems to be true for employees: “I learned a lot” is a story that whoever is hiring, seeding, funding, or advising you on your next undertaking is going to want to hear. “The bastards screwed me out of a bunch of money” isn’t.
That’s the funny part of the tech industry’s narrative about itself. For tech, failure is always assumed to be temporary; for everyone else, it’s terminal. Taxicab companies are going out of business because they’re losing money? Creative destruction, my friend—sink or swim. Uber hemorrhages cash? Well, that’s just a sign of how visionary the company is. This double standard justifies the exploitation of workers outside of the tech industry—and, in certain cases, the exploitation of workers within it.
The prevailing account of computer dating’s origins is the same kind of stylized informational portrait that you might put up on an online dating website. It hides a lot and only shows the things that you think people want to see. Brilliant young men of privileged backgrounds taking a risk by applying machines to a realm about as far away from cold, hard, technological logic as you could get—this makes for a good story, and one which we are primed to hear, because it plays to our cultural expectations. Yet the real story, warts and all, is much more interesting. And it helps us understand why computer dating is what it is today—why we love it, loathe it, need it, and fear it in nearly equal measure.
No Meetcute

In 1953 a young woman named Joan Ball stepped out of a mental hospital in England. Her mother had beaten her and ended up abandoning her—to say nothing of verbally and psychologically abusing her. When she ended up in the hospital, she found more of the same. In an era before mental illness was well-understood, and when young women were routinely incarcerated in mental hospitals for everything from sexual misbehavior to hysteria, many hospitals meant to serve the needs of the mentally ill were instead warehouses for people who—ill or not—had somehow stepped out of the bounds of social norms. Joan suffered physical abuse at the hands of her mother, who, it seems clear, was mentally ill herself. After struggling for years with a difficult home life, it was Joan’s eventual refusal to take further abuse that caused her mother to involuntarily commit her. Yet the hospital was so bad that once there, all she wanted was to go home. When she finally got out, however, there was no home for her to return to: she was no longer welcome in her parents’ home, and likely would not have wanted to go back even if they had agreed to take her.
At a typical fulfillment center, certain workers unpack products arriving from manufacturers and suppliers; others stow them amid vast rows of shelves; and “pickers” fulfill orders by grabbing the correct items from the shelves and putting them in a tote, which is conveyed to “packers” who prepare the order for shipment. Workers log their interaction with each product and its location via a scanner, which almost every one of them carries.

If you’re a picker, your scanner tells you the location of the product to be picked and begins counting down the time it should take you to get there. If it takes you longer than the allotted time, the clock starts counting up, recording the amount of time you’ll have to make up later to stay “above rate.” When you arrive at your destination, you scan the shelf or bin, find the item, and place it in your tote. Then you get another location. This process continues until the order — or some portion of an order or set of orders — is complete. You set the tote on a conveyor and the process starts again.
The scanner is a powerful surveillance tool. It records your productivity rate — displaying it on its interface — as well as the time between subsequent scans (aka Time Off Task or TOT). If your TOT exceeds fifteen minutes or your rate falls below the prescribed speed for the day, you’ll get a visit from a manager or a write-up. Too many write-ups and you’ll be cut loose. “Rates are used as Damocles’ sword,” Charlie said. “You can be king, but there’s a blade hanging above your head held by a thin hair.”

To encourage competition, managers publicly post a ranking of employee productivity at the end of each day. In some warehouses, there’s a whiteboard; in others, a printed piece of paper or an electronic display. Ashleigh Strange, who worked at a warehouse in Breinigsville, Pennsylvania, between 2013 and 2015, said this practice was also a “method of group shaming.” “If you were the worst person in the warehouse,” Ashleigh said, “you’re going to know it. And so will everyone else.” In some warehouses, bottom performers are automatically enrolled in remedial training — or written up.

Management also runs what employees called “power hours,” during which workers are incentivized by raffle tickets or Amazon “swag” to work as fast as humanly possible. “You get an unimportant reward for working as fast as you can,” said Charlie. “Everyone competes. This becomes the new baseline.” Online Amazon worker forums are full of strategies for artificially boosting rates. One worker discovered that managers were basing his productivity numbers on how quickly he started work after a break. By leaving a count loaded in his scanner, he could trick the computer into thinking he had resumed work with a flurry of activity. Others boost their count by rapidly scanning several bins of small items.

These little tricks get shared obliquely, “like hobo symbols,” said Charlie. “A lot of, ‘I don’t do this, but I heard that… ’ or, ‘This is the way I don’t do it.’” These strategies circulate through departments until management catches on, which they usually do. In the meantime, shortcuts and hacks allow for brief reprieves from the relentless pace of the work — sometimes more than a brief reprieve. As one prodigious hustler put it on Reddit, “I get my production really high and fuck around for the rest of the week.”
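The mechanics Charlie and Ashleigh describe reduce to two simple checks, sketched below in Python. This is my own reconstruction from the reporting above, not Amazon’s software: the fifteen-minute TOT limit comes from the text, while the target rate and the example scan log are invented for illustration.

```python
# A rough sketch of the two checks described above: Time Off Task (TOT)
# between scans, and an hourly rate measured against a daily target.
# Illustration only -- the target rate below is a made-up number.
from datetime import datetime, timedelta

TOT_LIMIT = timedelta(minutes=15)   # gap between scans that triggers a write-up
TARGET_RATE = 300                   # hypothetical "prescribed speed": scans per hour


def review_scan_log(scan_times):
    """Return the flags a manager might see for one worker's shift."""
    flags = []

    # Time Off Task: any gap between consecutive scans over the limit.
    for earlier, later in zip(scan_times, scan_times[1:]):
        if later - earlier > TOT_LIMIT:
            flags.append(f"TOT exceeded: {later - earlier} without a scan")

    # Rate: total scans divided by hours worked, compared to the target.
    hours = (scan_times[-1] - scan_times[0]).total_seconds() / 3600
    rate = len(scan_times) / hours if hours else 0.0
    if rate < TARGET_RATE:
        flags.append(f"rate {rate:.0f}/hr is below target {TARGET_RATE}/hr")

    return flags


# Example: three scans, with a long gap in the middle.
shift = [
    datetime(2020, 1, 1, 9, 0),
    datetime(2020, 1, 1, 9, 2),
    datetime(2020, 1, 1, 9, 30),
]
print(review_scan_log(shift))
```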
Above: the original, correctly classified inputs. Below: the incorrectly classified inputs, post-perturbation.

The writer Evan Calder Williams defines sabotage as “the impossibly small difference between exceptional failures and business as usual, connected by the fact that the very same properties and tendencies enable either outcome.”
Serendipitously, this is the literal mechanic by which the universal perturbation algorithm works. An object recognition neural network, for instance, takes an image and maps it to a point in some abstract space. Different regions of this space correspond to different labels—so an image of a coffee maker ideally maps to the coffee maker region.
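A minimal sketch, assuming PyTorch, of the geometry being described: a classifier maps an image to a region of label space, and a small, targeted nudge to the pixels can push it into a different region. For simplicity this uses the basic fast gradient sign method on a toy, untrained network rather than the universal perturbation algorithm itself; the model, image, and step size are placeholders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "object recognition" network: a 3x32x32 image -> 10 class scores.
# Each region of the output space corresponds to one label.
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 32 * 32, 10),
)

image = torch.rand(1, 3, 32, 32)              # stand-in for a photo of a coffee maker
original_label = model(image).argmax(dim=1)   # where the network maps it now

# Compute how the loss changes as the pixels change.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), original_label)
loss.backward()

# Step a small distance in the direction that most increases the loss
# (fast gradient sign method), then clamp back to valid pixel values.
epsilon = 0.1
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1)

new_label = model(perturbed).argmax(dim=1)
print("before:", original_label.item(), "after:", new_label.item())
# If the step is large enough, the nearly identical image lands in a
# different region of label space -- that is, it gets a different label.
```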
TCE exposure is dangerous because the chemical is carcinogenic and can impair fetal development. The chemical penetrated deep into the groundwater as a liquid and then began to evaporate, moving through air pockets in the soil. This migration continued through cracks in the foundations of homes and buildings, creating an indoor environment of prolonged exposure. People who both lived and worked in the plume were called “double dippers.”

When the initial spill occurred, IBM began digging wells. Twenty extraction wells pumped out contaminated groundwater. In 2002, the year IBM shuttered its factories, the New York State Department of Environmental Conservation (DEC) required the company to test the air quality. By 2004, they entered into a “formal consent order” to investigate and remediate the contamination. What IBM found led them to install vapor mitigation systems in homes and other buildings in the plume.

These systems are discernible by the white boxes attached to long pipes that reach up to roofs, rerouting the vapor from underground into the surrounding air outside. Now in homes, houses of worship, billiard bars, and barber shops, there is a constant whir of ninety-watt motors working against TCE. The contamination is continually announcing itself. It is ignorable as a low drone, forgotten and re-heard over and over again.

Endicott is important because it is not unique. It is a story that one can almost write without knowing the specifics. It is a story of the postindustrial long after the last shoe, car, or computer travels through the factory.

The old IBM complex in Endicott. Photo courtesy of the author.
Endicott proves that it is only through extraction, refinement, and manufacturing that computational feats of any kind are possible. The machine is made of materials from the earth: copper, gold, nickel, silicon. In order to purify, clean, and combine its pieces, intensive chemical baths are used. The computers and smartphones that result have an incredibly short working life, on average just two to three years. A shorter life than a car tire, a winter coat, a stereo, or a shovel.

Though compact when presented to consumers, these devices also have a huge material footprint. The inputs for a microchip are 630 times the mass of the final product. After the product is made, all of these excess inputs recombine into new chemical slurries, the unsaleable byproduct of the machine. These life-altering chemicals return to the earth in indigestible ways, and creep through our basements, waterways, genomes. There are 2.71 billion smartphone users in the world, and 1.5 billion personal computers in use. This means there are many towns like Endicott.

Inside the Clean Room

I too am from an IBM town, one in northern Vermont, the only one in the state. Like thousands of other Vermonters, I worked in the factory there. I didn’t get sick and neither did my immediate coworkers, but I began to hear troubling stories. I also began to read article after article imploring IBM to stay in Vermont. Eventually IBM did leave, but unlike in Endicott, the factory was taken over by a new company and kept running. In Vermont the pollution was quieter. The factory was not classified as a Superfund site, so it did not stick in the public consciousness — only in the thoughts of those who worked there. The pollution also remained quieter because the factory is still in operation.

The work itself, twelve-hour shifts in a factory built to protect the product, not the people, was dehumanizing. I performed one step of the hundreds required to make microchips. The Vermont plant specializes in amplification chips that transmit signals to satellites and enhance the speakers nestled next to our ears on our phones. Twenty-four hours a day, white-clad employees walk up and down the fluorescent hallways of the factory: workers in hoods, gloves, veils, booties, and coveralls so that the eyes are all that is visible. This is to protect the delicate chip from human contamination.
Of course, human efforts to achieve particular ends by introducing new species don’t always go well. The genre of stories about “invasive species” is one of the most reliable sources of cautionary tales about unintended consequences of human meddling in nature. The East Asian vine kudzu, for example, was widely planted in Southern states to help address soil erosion in the aftermath of the Dust Bowl; it became the weed that ate the South.

But not every story of introduced species is a warning. Rewilding projects, for example, attempt to restore land domesticated and cultivated by humans into ecosystems operating without human presence. This usually means reintroducing species that have been driven out of their former habitats or killed off by human settlement. In some cases, species have gone fully extinct, yet some other kind of creature may be able to do the same work.

In the rewilded nature reserve of Oostvaardersplassen in the Netherlands, for example, the roles of extinct aurochs and tarpans — wild cattle and ponies — are filled by physiologically similar breeds that eat the same grasses and have similar roaming patterns. In the American Great Plains, meanwhile, sustainable cattle raising practices have tried to replicate the grazing patterns of bison. Since cattle tend to roam less extensively, doing so requires more intensive human labor to direct herds.

This raises an important point: natural systems that now operate automatically may require more human labor to function as nonhuman species disappear. Life in Biosphere 2 was a lot of work. As one participant later recalled, “Farming took up 25 percent of our waking time, research and maintenance 20 percent, writing reports 19 percent, cooking 12 percent, biome management 11 percent, animal husbandry 9 percent.” As Biosphere 2’s nonhuman life support systems started to falter, its human inhabitants had to work harder to keep them functioning, from chasing pests that had no predators to pollinating plants when the bees died off. As scientists observed in the aftermath, “Biospherians, despite annual energy inputs costing about $1 million, had to make enormous, often heroic, personal efforts to maintain ecosystem services that most people take for granted.”
Of course, Biosphere 2 didn’t just rely on human labor to help nature function. Its vast technosphere was built expressly to fill in for Earth systems like ocean currents and water cycles. Biosphere 1 now has a rather substantial technosphere of its own, currently constituting 30 trillion tons of human artifacts, from computers to undersea cables to houses to lightbulbs. Most of it supports human life and pursuits. But some of it could also be put to use tending to the biosphere, as in Biosphere 2.

The garden of Eden and the aircraft carrier aren’t our only options. Most of our world is some combination of the two. Using technology to support ecological functions doesn’t have to involve building a giant array of machinery to replace Earth systems or trying to technologically manipulate the entire atmosphere, à la the Ecomodernists. But nor should it mean attempting to remove human activity and artifacts from ecosystems altogether. As Donna Haraway reminds us, “There is no Eden under glass.” Technology can play an important role in actively maintaining ecosystems rather than replacing them wholesale, in conjunction with human labor.

Some of this work is already happening. Drones are being used to reseed land for restorative purposes, effectively performing the work of birds while reducing human presence in remote areas. In the Great Barrier Reef, a robotic vessel protects indigenous coral species by killing the crown-of-thorns starfish that is suffocating the reef.

Paradoxically, these unmistakably human interventions often occur in the absence of actual humans. Robots can offer ways to preserve nonhuman ecosystems without more direct forms of human intrusion. They aren’t total replacements for organisms, of course. A drone can drop seeds but can’t lay eggs; a robot fish can kill starfish but can’t grow new coral. Indeed, none of the options available to us — nonhuman proxies, technological tools, human labor — is a perfect substitute for what they replace, and none ever will be. At best, they can provide rough approximations of certain functions. But these jury-rigged rivets might be our best hope for making a future on a damaged planet.
Autopilot’s enabled for this region, though I’m quick to shut it off. The highway’s riddled with flooding issues now, and I doubt the car could keep up. As we exit the parking lot and get on the road, Aluna and I pass the low, inflated lozenges of the refugee tents, pinned to the soil with near-invisible lines of rope. Nearly biological in their aversion to clean, sharp angles, growing out of the flat campus lawns like massive fungi. Logos adorn the sides, all the entities responsible for erecting this crisis architecture. Centra’s branding appears on the tent as a circle adorned with light rays, a flat design vision of a glowing sun. In the encroaching murk, it can’t be easily defined. The sun seems to waver in the shadows, like a pinned spider flat against the outer wall.
***

There was once a time—probably late ’50s, early ’60s—when vehicular windshield HUDs were an inescapable trend of automotive design. Glowing, transparent skins hemmed glass borders, providing location-specific updates, weather, and navigation tips. By then, every other American had a retina display, though the AR novelty of these car HUDs was pushed as if it were cutting edge. At their core, the overlays are candy-colored, highly restricted web browsers tacked to the front of our vehicles. “Futuristic” was the word that nobody wanted to use, and yet it guided every step of the design process. The HUDs exist because, as we near the end of the century, they seem like they should.
So we tried to build a bunch of different apps. None of them found a huge amount of success. But the last one we tried, a social app for making plans with friends, did well enough to get some attention from TechCrunch and other places in the tech press. It was fun, but scary. Because we always knew that if one of the tech giants got their shit together, they could eat our lunch if they wanted to. I remember waking up one day and seeing a new product announcement from Facebook that I was convinced would put us out of business. I ran to my cofounder’s room, freaking out. It turned out we were fine. Still, I had that feeling of “Holy shit, we’re fucked” a lot. We lived in constant fear of getting scooped.
But you didn’t. Not exactly. But after a year, we were still living off our savings. It was clear that we either needed to get funding, get a job, or get acquired. Around this time, we went to an up-and-coming company to ask them whether they would give us access to their private API. They said, essentially, “No way. You’re a tiny company that doesn’t matter.” But they also said they’d be interested in acquiring us. We were very surprised to hear that. We didn’t even really know what it meant. But we decided to pursue it.
What is the Carceral Tech Resistance Network? The Carceral Tech Resistance Network (CTRN) is a coalition of organizers who are campaigning against the design of, and experimentation with, technologies by police, prisons, border enforcement, and their commercial partners. We work to abolish the carceral state and its attendant technologies by building community knowledge and community defense. Our group is made up primarily of femme, Black, immigrant, POC organizers. My own work is embedded in Los Angeles, the Bay Area, and Portland, Oregon, but CTRN has organizers in most West Coast US states.
The network was created out of two primary needs: first, we started to realize that these technologies, often rolled out at a local scale, have afterlives—they travel to other contexts, where communities may have less familiarity with them, or no organized base prepared to confront and dismantle them. So there was a need to knowledge-share and foster mentorship between community organizations. And second, we felt an urgent need to build a different relationship to the cataloging, databasing, and archiving practices that are widely deployed in movement spaces—but which also share a troubled history with the exact same surveillance technologies we are working to dismantle.

How did you first come to work on these issues?

I started thinking about these policing techniques during the Ferguson uprisings. I became fascinated by predictive policing, an object that has captured popular and scholarly attention since its inception. Originally, I had aspirations to be an academic; I took the project of techno-criticism seriously. I described this recently as an impulse to “speak these technologies into illegitimacy.”

Things changed once I started to realize that academic research has a long history of being co-opted—even used against itself—by the particular systems that I was studying. Similar to prison industrial complex abolitionists in the 1980s and 1990s, I started to recognize that criticism was not going to be an effective tactic to enact change. So I started to look for other pathways. A couple of years ago, I came out to Los Angeles and began organizing here. And I realized that once you position yourself as an organizer, change becomes possible in a very different way.
They ended up inviting me to this crazy meeting in Shanghai that they were having. INBlockchain was running the biggest crypto exchange in China at the time, called Yunbi. They took the hundred biggest Yunbi customers and all their portfolio companies to a secret summit at the Park Hyatt in Shanghai. They also asked if we wanted to adjourn afterwards to a secret boat meeting, and I was like, “Definitely.” Ben and I thought, well, it’ll be expensive to fly back to China and we don’t have any funding, but let’s do it. Then when we got home, they had already bought us tickets and reserved a room at the Hyatt. My cofounder and I were kind of stunned.
When we got to the meeting, I gave the pitch for Stream, the project we were working on. I gave it in Chinese, which totally freaked everyone out. My partner Ben is Chinese-American, but he doesn’t speak Mandarin very well. It was funny; I was this white kid on stage giving our pitch in Chinese, and my Chinese partner was putting on headphones to hear the English translation. All the people around him are looking at him like, “What is wrong with you?” I gave the pitch and people got excited about it and we ended up moving forward with it. I started helping Xiaolai and Lao Mao with some of the platform stuff they were doing, and helped them localize their initial coin offering (ICO) platform to the US. I also sent over some suggestions for Yunbi and gave some advice on some of their companies. Eventually Xiaolai proposed that I come onboard as a partner and invest with them, rather than just run one of their startups.
Yet this daily deprivation, while at times made visible through popular protests, is largely suffered in silence in the desolate housing blocks of marginalized zones like Iztapalapa. This reality seems worlds away from the gleaming towers of the financial and political elite whose swimming pools never run dry. Across the city, the luxury real estate market has exploded, with new towers sprouting from the rubble of the 2017 earthquake like mushrooms of concrete and steel.
To add insult to injury, the falling water table has provoked severe land subsidence, causing many of the same problems in the periphery that the city center had faced in the decades prior. This has left sewer lines—carefully constructed to flow downhill—flipping like see-saws or simply broken. With even modest rains, these sewers overflow onto local streets and double or triple already grueling commute times, especially for the poor who live far from the city center. Even when the waters do not rise high enough to enter their homes, low-income residents run the risk of infection and ruined clothes trudging through the sewage from these shallow floods.
A Pinpoint on Google Maps

Although Bogdan had listed his Guantánamo experience on his LinkedIn profile, knowledge about most of the guards that return from Guantánamo is extraordinarily hard to come by. This is largely the result of decisions made by the Pentagon, which worked for years to ensure that most guards’ identities, decisions, and actions would not be documented in public archives.
The Guantánamo that emerges online tends to be a pinpoint on Google Maps, a small strip of land through which all kinds of people—private contractors, intelligence agents, soldiers, sailors, policymakers, lawyers, journalists—pass, disappearing in discourse once they leave the base. Google Images almost inevitably turns one’s attention to the physical detention camp, too. Searches produce a checkerboard of orange jumpsuits, snapshots of current detainees, splatters of camouflage. Over the past decade, the first page of results has evolved to include photographs of protesters in the mainland US, but even those images point back to the camp: many protesters have decided that the best way to remind civilians of Gitmo’s continued existence is to dress up like detainees. In the digital archives of major US newspaper outlets, there is a parallel pattern. Almost all the photographs accompanying stories about the detention facilities show a similar montage: hurricane fencing and barbed wire; American flags and the backs of military personnel; the small, beige-colored trailers containing men deemed too dangerous for US soil.

Political discourse about Guantánamo has also centered on the prison and its detainees. If the Zoomers at UNC Charlotte had looked for Guantánamo on C-SPAN.org, they would have scrolled through decades of videos of congressional representatives, national security lawyers, lieutenants, and journalists regurgitating the same five or six questions. When will Guantánamo close? Where will the detainees go? What might happen to the detention facilities if they are ever emptied of people? What is it like to see Guantánamo with your own eyes? What horrors might befall America if detainees were to be housed and tried in US federal criminal courts instead of military commissions? Wikipedia articles about Guantánamo echo these frames, and have become repositories of contested knowledge about the detained. Footnotes include Supreme Court cases about habeas corpus petitions, links to lengthy Pentagon reports on the detention facilities, memos written by ACLU lawyers arguing for the immediate closure of the prison, and a rich archive of investigative news articles that try to detail human rights abuses at the prison. In much of this writing, the passive voice lurks in the prose, quietly obfuscating precisely who is ordering that detainees be force-fed, who is implementing groin searches, who is doing the detaining.
Prominent tech investors like Eric X. Li have long doubled as defenders of the Communist Party. Alibaba founder Jack Ma’s Party membership came to widespread attention when he was honored by the Chinese government as a “reform pioneer” in 2018. The figures who preside over China’s tech boom aren’t all Party members. But they have all found a way to coexist with the Party-state—even if only uneasily, and perhaps temporarily.

This proximity has strongly shaped how Chinese tech has developed. The app economy is built around real-name registration, proof of residence, and so on—requirements that enable state agencies to keep a close eye on users. The most-downloaded app of early 2019 is an ideology instruction program called “Study Xi, Strong Nation,” which Party members and civil servants are required to use to ensure their engagement with Xi Jinping’s thoughts and teachings.

If the Chinese internet has enabled new forms of participation, it has also facilitated the detection and elimination of other forms of participation that the Party deems intolerable. The Party-state has decimated the nascent feminist movement and undercut the work of criminal defense lawyers, journalists, and HIV/AIDS activists. The big, nationwide movements get the most attention, but crackdowns extend to localities far from Beijing. In January 2019, the Guangdong provincial public security bureau fined a thirty-year-old man 1,000 renminbi for downloading and installing a proxy server on his phone.

The site of the most brutal and heartbreaking application of digital surveillance technologies within China’s borders is farthest from the capital, in the western region of Xinjiang. Digitally enhanced monitoring and gulag-style reeducation camps housing as many as one million detainees have devastated Uyghur communities. And there are indications that the technologies and techniques perfected in Xinjiang are designed for export—not only to other parts of China, but also to other countries around the world. In this way, the geography of Chinese technology can move swiftly from the local to the global and back again. These oscillations are one more reason why it is imperative for everyone, not just China experts, to pay attention.
4.

As Chinese-built telecommunications infrastructure breaks ground around the world, as Chinese companies buy American apps like Grindr and offer cloud-computing services from Indonesia to Canada, and as WeChat Pay and Alipay become accepted everywhere from duty-free shops in European airports to New York City taxi cabs, it has become more difficult to say where the “Chinese internet” begins and ends. The writers in this issue travel across national borders, socioeconomic divides, and possible futures, pursuing glimpses not only of what the Chinese internet is but where it is—and where it may go from here.
In the same way that computers are automated to fail over to the next system, the app will fail over to the next human if one of them is down.

Yeah. If I sleep through an alarm, our escalation is set up to try my team members first. Then my boss, then my boss’s boss, all the way up to the executives. If all of us sleep through all the alarms, the CEO would get paged. I’ve never seen that happen before, though.
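As a rough illustration of that human failover, here is a sketch of an escalation policy in Python. It is not the team’s actual paging setup; the chain, the timeout, and the helper functions are invented stand-ins for a paging provider’s API.

```python
import time

# Hypothetical escalation chain, ordered from first responder to last resort.
ESCALATION_CHAIN = [
    "on-call engineer",
    "teammate",
    "manager",
    "director",
    "CEO",  # in practice, never reached
]

ACK_TIMEOUT_SECONDS = 5  # minutes in a real setup; kept short here for the demo


def send_page(person, incident):
    """Stand-in for a call to the paging provider's API."""
    print(f"paging {person}: {incident}")


def acknowledged(person):
    """Stand-in for polling the provider to see if the page was acknowledged."""
    return False  # pretend everyone slept through the alarm


def escalate(incident):
    # Page each person in order; stop as soon as someone acknowledges.
    for person in ESCALATION_CHAIN:
        send_page(person, incident)
        time.sleep(ACK_TIMEOUT_SECONDS)
        if acknowledged(person):
            print(f"{person} acknowledged; escalation stops here")
            return
    print("nobody acknowledged; the whole chain has been paged")


escalate("disk usage alarm on primary database")
```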
What has happened is that I’ve been paged for something, didn’t know how to deal with it, and then paged someone else to wake them up to help.

How does it feel to do that?

I mean, I wish I never had to do that. It sucks, because I know how garbage I feel after I’ve been woken up at that time. But this is actually a place where team culture is important. If someone else wakes me up, I try to respond without resentment and without making somebody feel bad for needing help. We don’t page each other frivolously, but if someone doesn’t know what to do and I’m second in line, it is my job to respond and help that person out. It can create a really toxic culture if you’re like, “Ugh, why did you wake me up for this?” And if somebody stops asking for help, that is a big potential failure scenario. That’s why, when we onboard someone, we really play up the “It’s totally super fine! Don’t worry about it, page me anytime!” They won’t actually page me anytime, but it’s important for them to know that they can if they’re in trouble.
Yet with the digital era, even this bastion of privateness seems to have fallen. The popularization of the internet and social media, and the fact that they are often accessed through portable devices like smartphones and laptops, has unanchored media consumption from specific sites. The bedroom is now as permeated by the public sphere as anywhere else, perhaps even more so—it has become, in the words of the scholar Zizi Papacharissi, “the mobile and connected enclosure of [our] private cocoon.” Indeed, research has shown how bedrooms have increasingly become one of the main sites of digital media use. A 2017 study published in the journal of the American Academy of Pediatrics showed that 75 percent of children and 70 percent of adults report viewing or interacting with screens in the bedroom. This situation has led to couples complaining that smartphone use negatively impacts their sex lives, as well as growing concern that “doom-scrolling” in bed negatively impacts our sleep patterns.

The use of digital media in the bedroom is a reflection of the portability of such media in the era of the smartphone. But it is also a reflection of worrying socioeconomic trends. Rising housing costs and the difficulty of getting on the housing ladder mean that a growing number of young adults are now living with their parents. According to a 2020 Pew report, 52 percent of US citizens ages 18–29 lived with their parents, up from 47 percent before the pandemic. In the UK, according to the Office for National Statistics, around 42 percent of 15–34 year-olds did the same, compared to 35 percent in 1998. Further, the number of people flat-sharing with friends and strangers has grown, with some reports of a fourfold increase in the 2010s, particularly pronounced among those ages 45–54.

This means that, especially in big cities, bedrooms have become mini-apartments: multifunction places where people do not just sleep, rest, and “disconnect,” but where they work, study, and use the internet. For teenagers, meanwhile, bedrooms are the only place in the household where they can engage in online conversations without their parents meddling.
Movement Homes

The places where politics takes place have important consequences for both its form and its content. So what are the implications of the bedroom becoming a place of politics?

At the level of form, we see bedrooms everywhere in online life. They have become the contemporary equivalent of the speaker’s podium. Bedrooms appear as the backdrop in political TikTok and YouTube videos, as well as in political meetings and talks conducted over Zoom. A widely read 2019 report on the climate movement carried a picture of Paul Campion, a Sunrise Movement activist, in his bedroom in a small apartment in Washington, DC with the caption “the organization’s ‘movement home.’”

At the level of content, politics from the bedroom has to some extent become a politics about the bedroom. Issues that are associated with the bedroom—from sickness to sexuality, housing, and mental health—have become more prominent in political debates. In the era of social media, it has become easier to make the issues contained in the bedroom visible by representing them through pictures, videos, and other multimedia material recorded there.

Bedrooms and beds have appeared in many recent campaigns. They appear in the campaigns of feminist and LGBTQI+ groups as a means to discuss issues of sexual and reproductive rights, as in #MeToo activist Alyssa Milano’s call for a sex strike, a form of protest that by its very nature is located in the bedroom. Similarly, housing campaigns such as Generation Rent and #VentYourRent launched in London in 2016, as well as the Deutsche Wohnen & Co. enteignen referendum in Berlin in 2021, often featured images of beds and bedrooms, and saw people complaining about unhealthy housing conditions and difficulty sleeping. Some scholars have even coined the expression “bed activism” to describe the various campaigns where bedrooms and the act of sleeping are a key theme.
Once these devices are widespread, it becomes easier to justify accessing the data for prosecutorial, rather than rehabilitative, purposes. The California Department of Corrections and Rehabilitation claims that “GPS has proven to be an effective tool used in supervising offenders who are at high risk of re-offending and where knowledge of their whereabouts is a high priority for maintaining public safety.” What that means in practice is that police within the United States routinely access ankle monitor GPS data to see if wearers were near crime scenes, without the legal process that would be required to gain that information from a third party.
Sometimes such location information is gathered for the express purpose of prosecuting new crimes, unrelated to any rehabilitative purpose. In a recent case out of New York, a judge finally pushed back. Judge Jack Weinstein ordered the suppression of all evidence gathered through an ankle monitor. He found that law enforcement had required a parolee to wear the monitor for an additional two years to gather information for a pending criminal indictment, and that was an unconstitutional violation of the parolee’s rights.
But the boss wasn’t the only source of risk in a remote workplace. Without being able to meet face-to-face with their coworkers, organizers also struggled to build relationships. Before the pandemic, such relationships had sustained collective action campaigns. With the lack of a shared physical space, many of these faded away.
“It’s harder to do this over a video call than it is in person, because in person you’re gonna see them in the office again in the real world, and it continues to humanize and endear you… When it’s a video call, it’s more structured and a little less humanized.” — Employee at a marketing firm

Online, organizers had to find a way to recreate the offline spaces that held collectives together. Gig workers were pioneers in this space, as they had been an all-remote workforce since long before the pandemic:

“[Gig workers were] this remote, atomized workforce that had no shop floor or break room, no true ‘workplace,’ and really no clear, obvious way of connecting with fellow workers. We had to digitally recreate the infrastructure that exists in a traditional workplace, a place where workers could come together to communicate together and express outrage about things and their relationship with their labor, their relationship with their company.” — Gig worker at an online delivery platform

Organizers began creating online spaces for relationship-building on Slack or Discord. Know-how on community management became essential; for instance, workers who had previous experience moderating large online communities had a major advantage. Codes of conduct, community standards, and other social expectations in these new spaces had to be established and enforced. Certain individuals acted as community moderators, warning colleagues about sharing sensitive details in the chat, as well as working to foster a sense of togetherness.
The inspiration for the app came from Reason Digital co-founder Matt Haworth’s work with Manchester Action on Street Health, a sex worker support service. They keep an ugly mugs list, but it’s not updated fast enough—in the time it takes to produce a physical booklet, or even to push new information to their website, another worker might encounter the same violent client. The immediacy offered by an app could be the difference between life and death. As project manager Jo Dunning points out, “days lost cost lives”.
Reason Digital worked closely with sex workers from the beginning to develop the app. That’s why the background of the app is black—to prevent the backlight from illuminating the worker’s face and betraying what they’re doing. It’s also why the phone’s location data does not feed back into a database—otherwise, the app could very easily be used to track the movements of sex workers throughout Britain. Users are at liberty to sign up with a fake name, and use a phone number or email address they’ve created exclusively for the service.
How automated would that process be? Are we talking about software making recommendations to human traders, or actually executing trades itself? The level of human oversight varies. Among sophisticated quantitative investors, the process is fairly automatic. The models are being researched and refined almost constantly, but you would rarely intervene in the trading decisions of a live model. A number of hedge funds, mutual funds, and exchange-traded funds (ETFs) run on auto-pilot. By contrast, most traditional investors use models to provide guidance rather than to generate automated trading decisions, since it’s unlikely that they could operationalize a complex trading strategy.
One of the challenges with machine learning is explainability. As the model becomes more complex, it can become harder, even impossible, to explain the results that it generates. This has become a source of concern as public scrutiny of the tech industry has increased, because you have algorithms making decisions that affect people’s lives in all sorts of ways while the reasoning for those decisions remains completely opaque.
Sometimes the tools that we need most urgently are the lowest tech. When cutting-edge fertility medicine fails, you can find counsel and comfort in a decades-old web forum. Sometimes it may be harder for a blind user to master the Be My Eyes app on his iPhone than to call a friend. Sometimes the benefits we get from tools are unintended. If cryptocurrency enthusiasts can’t mine enough bitcoin to pay the heating bill, at least the heat from the mining rigs can warm their house through the winter.

But some digital technologies hurt bodies in predictable ways. The most cutting-edge facial recognition software encodes centuries of racism and helps the police perpetuate tragically familiar forms of racist violence. The technocrat says: Let’s enlarge the corpus of training data! The moral response may be: This tool should not exist.

We tend to think of information as abstract or disembodied. When cattle rancher and Grateful Dead lyricist turned Electronic Frontier Foundation founder John Perry Barlow wrote “A Declaration of the Independence of Cyberspace” in 1996, he declared that the internet was “the new home of the Mind.”

“Ours is a world that is both everywhere and nowhere, but it is not where bodies live.”

Perhaps not. But, as feminist anti-racists quickly pointed out, if the internet was the Empire of the Mind, not everybody could afford the ticket there, or had the right entry and exit visas. If you had the kind of body that put you on the wrong side of redlining, historically, you could not bring computers into your school district through a sheer act of will.

Moreover, the internet itself has a body that needs care and protection. Many of the most dangerous threats to networks come from physical errors: fat thumbs on the keyboard, a USB stick a startup employee is foolish enough to insert. Late at night, the graveyard shift arrives to guard the servers.
3/

In the past decade, cheaper and smaller computers have brought cyberspace into meatspace. Now that the internet is everywhere, it is not only tracking our bodies. It is changing them, too.

What even is a body? It may seem like the original ground truth. But the word is also a metaphor. It can mean anything we are supposed to see as one thing, anything we want to hold together. As in: A body of land. A body of knowledge. The body politic.
For example, from what we know about the companies operating in the oil market in Uganda, we can reasonably assume that they’re employing some security firm like NSO Group to spy on the activists. [Eds.: NSO Group is an Israeli spyware company that sells surveillance software to governments.] And we know that the Ugandan government is monitoring them. Knowing those specifics helps us explain to activists how Tor can help. We also spend time conducting user tests, asking activists to complete a series of tasks using Tor Browser, and documenting what was difficult.

Beyond the fairly unique threat models that people are working with, it’s helpful to know how internet infrastructure impacts Tor use and performance in different places. For example, using Tor in San Francisco is slower than using Chrome in San Francisco because, in order to anonymize your traffic, Tor bounces your web requests to relays all over the world before it returns the website you want. So we figured that a slower internet connection would make Tor even slower. How much slower? In Uganda, it turns out: not much! The electricity goes in and out, so you get disconnected, but that impacts every browser the same way. It’s also helpful to know how much data Tor is using and how much mobile data plans cost; a lot of the people we met there are primarily browsing on their phones and we don’t want people to have to use up their whole data plans. We learned about censorship by internet service providers there and which of the anti-censorship mechanisms built into Tor will work in those contexts. There’s all sorts of feedback that’s coming to us in those meetings. We take all this information and write up a report for the developers and the user experience team so that they can address any challenges that arose.

Socially Necessary Library Time

You’ve said that the discussion about privacy is really a discussion about power. The fact that Tor solves a similar privacy problem for environmental activists in Uganda and also for librarians in the US seems to highlight that.
Absolutely. That’s the common thread in all of this, and that’s my approach with Library Freedom Project. To me, privacy is the Trojan horse. Then when I get in the building, I’m like, “Alright, let’s actually talk about capitalism.” Privacy is important, but what we’re really talking about is the newest frontier of exploitation by capital.

With both Tor and the Library Freedom Project, you’re making libraries places that are anti-capitalist not just because they’re free, but also because data harvesting won’t work in there. You’re making a Faraday cage around libraries.
Today, age discrimination is a central feature of the Valley. But another thing to remember about the Valley is that people tend not to live there forever. They migrate in and out. I often think of the Valley as an island. I believe 40 percent of its residents at the moment were not born in the United States. People come to the Valley for ten years and then they go back to their home country and start a firm. It’s a long-term migrant spot. It’s not like my hometown, where people have been there for three generations.
So do you think Silicon Valley’s obsession with youth is driven more by economic imperatives than the cultural residue of the 1960s? Our society tends to give permission to younger people to do certain kinds of experimenting that also happen to be really valuable inside the tech world. So, for example, we give our young people permission not to get married or have kids until they’re in their mid-thirties. That gives you your whole twenties to live in tech dorms, to try stuff out, to do things that my grandmother would have considered screwing up. My grandmother wanted to get married by twenty-seven. She was committed to that. And she wanted to have stability. She wanted to buy a house. She wanted to grow her family. She had a very particular vision of the progress of life.
The night of the action, everyone on the shift was talking on the company buses on the way to work. The idea was to do something in the last hour, during our obligatory overtime, but it ended up starting much earlier. The slowdown took place mostly in the Pick department. The pickers picked one item from the shelves for each tote instead of the usual twelve or fifteen. Sending the boxes to the Pack department like that made a mess of the conveyor belts; thousands of these mostly empty Amazon totes were falling from the belt, which then brought the Pack and the Ship departments to a standstill.

It didn’t take hundreds of people. It was really clever to recognize that the Pick department is a choke point. Some people say that the Dock or Ship departments are the choke points in the warehouse since, when you do a labor action in the Ship department, you block trucks from leaving. But this was in Pick.

Pick is where they send people who join Amazon on short-term contracts from temp agencies because they can train a picker in a few hours. That’s what was unique in this action, that these workers who don’t have special training—they weren’t, you know, forklift drivers—understood how to shut down a warehouse. So it was amazing, this popular wisdom. It showed us that we don’t need a labor sociologist to tell us “do it this way” and that we don’t have to limit ourselves to the restrictive legal frames of labor and union law.
What happened after that?

Retaliation. Amazon interviewed about ten workers and some of them, under pressure, signed a statement saying they took part in the action and regretted it. Amazon only stopped when we made their interrogations public. We defended a woman in court who was fired afterwards—or, she was not technically fired, but her contract was not renewed. One permanent worker was also fired and we’ve been fighting that in court for the last four years.

After the action, our union entered into a formal “labor dispute” procedure where we brought our demands to negotiations with Amazon management. It was not very useful. Our union believes that actions, not negotiations, are the best way of talking to Amazon.

In the years of organizing since then, we’ve had ups and downs. There have been other important actions, and one of them happened last summer. Five thousand workers took part in a strike vote. That’s still not enough to win the right to a legal strike because you need 50 percent of the whole company. To give you a sense, there are about 8,000 workers including temps in our warehouse, and there are nine warehouses in Poland. So while we didn’t get enough votes this time, there’s an army of people in Poland who did vote to strike. That’s what we think about work conditions in the warehouses.

Still, it’s difficult because of the permanent turnover—people joining the warehouse and then leaving, and more people on short-term contracts who don’t have labor protections, so their contracts are not prolonged if they’re going on sick leave, not meeting rate, or are open union members.

That seems like difficult terrain on which to build long-term organizing relationships. How does your union adapt to Amazon strategies like short-term contracts or their “employee forum,” which sounds kind of like an internal company union?

You have these employee forums all over Europe. Amazon uses theirs to advertise that they have “very good contact with the workforce” and “eight ways of communicating with employees.” One is the employee forum, another is a board at the company that any worker can send questions to and the board will answer. They have opinion polls every day! They’re really proud of this.
That said, it depends on the platform. Content moderation doesn’t have a one-size-fits-all solution. Take Twitter. The Rose McGowan case was a good example of what automatic flagging looks like when it comes to doxxing. They figured out what a phone number looks like: it’s a parenthesis, three numbers, end parenthesis, three numbers, a dash, four numbers. That’s easy to detect.
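For illustration, the pattern being described is easy to express as a regular expression. This is my own sketch of the idea, not Twitter’s code; the sample tweets are invented.

```python
# Flag text containing a US-style phone number: "(555) 123-4567".
import re

PHONE_PATTERN = re.compile(r"\(\d{3}\) ?\d{3}-\d{4}")

tweets = [
    "call me at (555) 123-4567",   # hypothetical example: would be flagged
    "no numbers here",             # would pass
]

for tweet in tweets:
    if PHONE_PATTERN.search(tweet):
        print("flag for review:", tweet)
```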
At Twitter, they’re trying to fix things with code first, instead of hiring smarter people. And that can be really problematic. I don’t think Twitter knows enough about harassment. They’re trying to solve problems in a way that they think is smart and scalable. To them, that means relying on autonomous systems using algorithms and machine learning. It means fewer human-powered systems and more tech-powered systems. And I don’t think that’s the best way to go about solving a problem as contextual as harassment, especially if you don’t have good examples that explain how you’re even defining harassment.
To identify whether someone was such a patient, and what treatments should be made available, Benjamin built his book around his Sex Orientation Scale, often known simply as the Benjamin Scale.

The Sexual Orientation Scale from Harry Benjamin’s The Transsexual Phenomenon (1966). Courtesy of the Digital Transgender Archive.
A doctor using the Benjamin Scale would first work to understand the patient, their life, and their state of mind, and then classify them into one of the six “types.” Based on that classification, the doctor would determine what the appropriate treatment options might be. For Type V or Type VI patients, often grouped as “true transsexual,” the answer was hormones, surgical procedures, and social role changes that would enable them to live a “normal life.”

But this scale, as an instrument of measurement, came with particular assumptions baked into it about what it was measuring, and what a normal life was. A normal life was a heterosexual life: a normal woman, according to Benjamin, is attracted to men. A normal life meant two, and only two, genders and forms of embodiment. A normal life meant a husband (or wife) and a white picket fence, far away from any lingering trace of the trans person’s assigned sex at birth, far away from any possibility of regret.

Further, it meant that trans women who were too “manly” in bone structure, or trans men too feminine, should be turned away at the door. It meant delay after delay after delay to ensure the patient really wanted surgery, advocating “a thorough study of each case… together with a prolonged period of observation, up to a year” to prevent the possibility of regret. It meant expecting patients to live as their desired gender for an extended period of time to ensure they would “pass” as “normal” after medical intervention — something known as the “real life test.” Ultimately, Benjamin wrote, “the psychiatrist must have the last word.” So to Benjamin, a “true” trans person was heterosexual, deeply gender-stereotyped in their embodiment and desires, and willing to grit their teeth through a year (or more) of therapy to be sure they were really certain that they would prefer literally anything else to spending the rest of their life with gender dysphoria.
When Lange and Lerner were writing, modern digital computing didn’t exist. But at the end of Lange’s life, as computers emerged, he discussed the possibility that they could perform this price-guessing work far better than humans. This line of thinking has been taken up by contemporary digital socialists, who point to developments in applied mathematics as evidence that we could do away with the price system, calculating optimal allocations of resources with advanced forms of programming instead.

After all, we have more data than ever before, as well as an unprecedented amount of processing power with which to perform computations on that data. Gigantic firms like Walmart and Amazon are already using advanced algorithms to put all this data to work to plan their internal operations. So, can the promise of algorithmic socialism finally be fulfilled?

Not so fast. Advocates of algorithmic socialism misunderstand Mises’s position in the socialist calculation debate, and thus fail to respond adequately to his criticisms. For Mises, the challenge is how to allocate intermediate goods to producers of final goods. That’s not something companies like Walmart and Amazon do, for the simple reason that these companies distribute goods rather than make them. The firms supplying pencils to Amazon and Walmart still rely on market signals to figure out the best way to make their product.
As Mises’s student Friedrich Hayek later emphasized, an economy is not a set of equations waiting to be solved, either with a capitalist price system or a socialist computer. It is better understood as a network of decision-makers, each with their own motivation, using information to make decisions, and generating information in turn. Even in a highly digitally mediated capitalist economy, those decisions are coordinated through market competition. For any alternative system to be viable, human beings still need to be directly involved in making production decisions, but coordinated in a different way.

As Hayek observed, running a business involves practical reasoning, acquired through years of experience. To reproduce the work of the manager of a pencil factory, a planning algorithm would have to know not only about the supply and demand for each type of graphite used in pencil making, but also about the detailed implications of choosing one type of graphite over another in that particular production location, with its specific machines and workforce. It is possible that one could formalize all of this knowledge into explicit rules that a computer could execute. However, the difficulties involved in articulating such rules across all workplaces, in all sectors, are simply staggering.
— Del Harvey, Vice President of Trust and Safety at Twitter, from “Protecting Twitter Users (Sometimes from Themselves),” March 2014.

Scale and size, though, are not the same thing at all. If we think about simple massification, we might talk about the difference between a single cell and ten million cells, but that’s not the same as talking about a human arm. Scale is a dimension along which different objects arise; the arm exists as an object at one scale, the cell at another.
—Paul Dourish, The Stuff of Bits: An Essay on the Materialities of Information (2017) Social media platforms must moderate, in some form or another. They must moderate to protect one user from another, or one group from its antagonists. They must moderate to remove offensive, vile, or illegal content. Finally, they must moderate to present their best face to new users, to their advertisers and partners, and to the public at large.
US thinkers generally welcomed these moves, expecting them to erode the power of the Chinese Communist Party. In March 2000, after the United States reached a deal with China that would lead to WTO accession, President Bill Clinton said Chinese leaders “realize that if they open China's markets to global competition, they risk unleashing forces beyond their control… But they also know that, without competition from the outside, China will not be able to attract the investment necessary to build a modern, successful economy.” American officials believed that greater integration with global markets had the potential to push China toward political liberalization.
They believed that the internet would have a similar effect. In the same March 2000 speech, Clinton famously quipped that trying to crack down on the internet was “like trying to nail Jell-O to the wall.” “We know how much the internet has changed America, and we are already an open society,” Clinton observed. “Imagine how much it could change China.” China, of course, was not an open society, and its government would go on to develop powerful, carefully calibrated methods to control the way the internet operated there. Over the past twenty years, Chinese officials have been remarkably successful in reaping the economic benefits of both the internet and global markets—with significant costs to workers and the environment—while managing the threats to domestic stability and continued Communist Party rule. More recently, however, the Chinese government has moved to assert even greater control. Since Xi Jinping’s ascendancy to the Communist Party leadership in 2012, it has become clear that Chinese leaders perceive some forms of international economic entanglement as risks to the political system, and that they see closer state supervision of the internet as essential to preserving their power. Despite American idealism about the liberating power of free trade, free markets, and the internet’s open architecture, neither joining the WTO nor getting online created a free society in China. It’s easy to understand why. China’s government, market, and society were not passive recipients of influence from external rules and principles. Instead, they actively adapted the structures of international connectivity to serve their own purposes. Life in China has been indelibly transformed by the links forged in the late 1990s, but not in the ways that American observers expected.
What would public options look like in a technological context? Municipally owned broadband networks can provide a public alternative to private ISPs, ensuring equitable access and putting competitive pressure on corporate providers. We might even imagine publicly owned search engines and social media platforms—perhaps less likely, but theoretically possible.
We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.
Virtual Mrs. Dodd Maryann’s experiment with techno-care occurred against the background of significant national investments in nursing. In 1960, the US Public Health Service created a new Division of Nursing tasked with improving patient care, increasing the number of nurses, and ensuring better nursing education. In 1963, the Surgeon General’s office published the report Toward Quality in Nursing, which identified, among other problems, too few nursing educators, too few new nursing students, and an inadequate nursing education system. Maryann realized her experimental nursing course could be positioned as an efficient technological solution to these problems, training nurses faster and more cheaply than traditional nursing courses. In 1964, Congress enacted the far-reaching Nurse Training Act, allocating the substantial sum of $283 million (approximately $2.3 billion in 2020 dollars) over five years to nursing education. The Nurse Training Act funded the expansion of Maryann’s PLATO project to develop a complete course on maternity nursing and a series of lessons on pharmacology.  The reliance on a single “typical” patient continued. The maternity nursing course focused on the virtual Mrs. Dodd, a secretary. Its twenty-two lessons “emphasized the normal, and presented problems which required knowledge of the normal as a basis for recognition of and action concerning the abnormal.” Students learned that “Mrs. Dodd suffers from many of the common discomforts of pregnancy,” including nausea and swollen feet. And just as it was with the “typical” heart attack patient, the way “normal” Mrs. Dodd responded to therapeutic care was contingent on how PLATO had been programmed.
That programming was based on the standard of care for pregnancy in the 1960s, which was developed for, and applied to, white women—a bias that reinforced the invisibility of Black women to the medical establishment. (At many hospitals, including Mercy, the nurses, too, were overwhelmingly white; according to an archive at the University of Illinois, among the hospital’s hundreds of graduates until it closed in 1970, there were only ever six Black students.) For example, in the PLATO course, nurses monitored virtual Mrs. Dodd throughout all three trimesters of her pregnancy, as well as labor and delivery. But many Black women, then and now, lack sufficient access to and insurance coverage for complete prenatal and postnatal care; nurses exclusively trained to care for patients like Mrs. Dodd are poorly prepared to care for these women. Indeed, in the past few years, prominent Black women including writer and scholar Tressie McMillan Cottom and tennis superstar Serena Williams have called attention to how they and other Black women are dangerously mistreated during pregnancy, labor, and delivery. As Cottom recently wrote in Time: “In the wealthiest nation in the world, black women are dying in childbirth at rates comparable to those in poorer, colonized nations.” Though severely limited, Maryann’s nursing course was nevertheless a success—in part because it reflected the limitations of the surrounding medical establishment. All of the students who completed the PLATO maternity nursing course later passed the Obstetric Nursing portion of the Illinois State Board examinations; the biases encoded in Mrs. Dodd were the same ones written into the exam. During the remainder of the 1960s, hundreds of students at Mercy Hospital School of Nursing and nearby Parkland Community College completed PLATO nursing lessons, thus inscribing the biases into their own care.
Do landlords do these kinds of evictions in order to get rid of rent control in their buildings? Erin: Indirectly, yes, because they can rent and sell the units for more money if there’s no rent control. In San Francisco, after a landlord uses the Ellis Act to evict tenants from a rent-controlled building, the building will often then get sold as multiple “tenancies in common,” which will still have rent control—but then they will get converted into condos. And when that happens, the building loses rent control. That’s one of many loopholes. In short, what we’re seeing with Ellis Act evictions and Owner Move-In evictions is that we’re losing effective rent control, which, in the case of condo conversion, is a nonrenewable type of protection.
Azad: I was thinking we could add a feature to EvictorBook that shows how much a landlord profited from an eviction. We have the sale price before and the sale price after. Erin: Oh, yeah, that would be great to see. Azad: It wouldn’t be hard to do. We have all this data that we’re actually not spending much time analyzing. We know who the evictor of a given building was—and not just who evicted that one unit a few years ago, but looking back over fifteen, twenty years of evictions. That, combined with the networks of LLCs, shows not just ownership structures but also the evictor structures in the history of housing, which is the history of gentrification and the history of displacement in neighborhoods that are financialized. There are so many questions we could use this data to think about.
For this reason, Joan had a devil of a time advertising her new endeavor. No respectable newspapers wanted to publish ads for this unseemly type of establishment. So Joan used her creativity and went one better than print media. The people likely to use her service would already be somewhat edgy, and somewhat marginalized, so why not meet them where they were? That was how, in the mid-1960s, her ads ended up sailing across the airwaves, over the water, and illegally coming into the United Kingdom.
In the 1960s, all news media—and, by extension, entertainment—was regulated by the British government. In part an artifact of the war, and in part the historical result of a strong centralized government, media regulation restricted not just what was said on the BBC, but also what was sung and played. This meant no rock ‘n’ roll on the radio—or anything else that would be offensive to the (imagined) British public.
But it’s also an open question as to whether those kinds of resources actually have substitutes. Plastic chairs can substitute for wooden ones, or plastic bags for paper — but can you build a substitute for an entire forest? Can human technologies or human labor substitute for the nonhuman work done by other organisms? Or are there certain kinds of work that only nature can do?  Today’s substitution optimists remain bullish. A group called the Ecomodernists, whose members include famed cultural entrepreneur Stewart Brand, geoengineering researcher David Keith, and Breakthrough Institute founders Ted Nordhaus and Michael Shellenberger, has taken up Brand’s famous injunction from the Whole Earth Catalog: “We are as gods and might as well get good at it.” Despite the signs of destruction all around, they assure us that human powers can yet be channeled to produce a “good Anthropocene.”  In their view, resource scarcity isn’t a problem: the Ecomodernist Manifesto of 2015 declares that “substitutes for other material inputs to human well-being can easily be found if those inputs become scarce or expensive.” There are no real limits to growth: the sun provides more energy than we can hope to use, and any other given physical resource can be replaced with something else. That implicitly includes nature’s reproductive functions. Carbon capture-and-storage technologies can replace a forest’s capacity to absorb carbon. Injecting aerosols into the sky to make clouds more reflective mimics volcanic eruptions that spew sulfur into the atmosphere, helping to cool the earth.  If Ecomodernists represent one extreme, the other end of the spectrum is occupied by those who spurn any kind of substitution. “Deep ecologists” see all of nature as intrinsically valuable: it’s simply impossible to substitute for the unique and irreplaceable value of any given organism. For other ecologically minded thinkers, including proponents of “degrowth,” the prospect of substituting technology for complex natural processes that we don’t even fully understand is a typical demonstration of human arrogance, one that’s certain to result in unintended consequences. In this view, technology is synonymous with the “techno-fix,” a futile attempt to avoid deeper social and economic change through innovation.
Neither of these positions is satisfying. It’s true that the Ecomodernists are wildly optimistic about human capacities and willfully obtuse about their limits. But it’s not enough to smugly tut-tut at human hubris while the planet burns. Given how quickly the effects of climate change are materializing, even drastic decarbonization is unlikely to stop more mass die-offs and other forms of ecosystem dysfunction. We should hope that at least some ecosystem activities have substitutes, even if they can’t be perfect ones.  In her 1970 book The Dialectic of Sex, best known for advocating artificial wombs as a substitute for biological ones, the feminist thinker Shulamith Firestone also called for a revolutionary ecological program. Such a program should seek to seize “control of the new technology for human purposes, the establishment of a new equilibrium between man and the artificial environment he is creating, to replace the destroyed ‘natural’ balance,” she wrote.  Firestone, to be sure, had too much confidence in the possibility of liberation through technology, and too much fondness for the project of dominating nature. We have yet to automate human reproduction, and we’re similarly unlikely to exert total technological control over Earth’s reproductive functions. But we should nevertheless take seriously Firestone’s impulse to see technology as part of the project of making a liberatory and livable planet rather than aiming for an impossible return to a natural balance that’s everywhere in shambles and that in any case was never so harmonious as we imagine. We don’t have to build the equivalent of an artificial womb for the entire Earth. But we should think about how to use our technologies for purposes both human and nonhuman, in a world where nature and human artifice are now so thoroughly entangled as to be inseparable.  The story of Biosphere 2 offers a way of thinking through what that might look like — both its possibilities and limitations.
My sources are quite broad because the project is not a finite and finished one, but rather an open framework for addressing the multiple layers of the molecular colonization that currently traps us and the planet in a collective mutagenesis — mutation to our bodies, sex, gender, fellow non-humans, and environment.  How did you start to work on your open source estrogen project? What materials did you need? What kinds of collaborators, if any?  Initially I collaborated with the Canadian artist Byron Rich, who introduced me to the possibility of an open source birth control pill that contains primarily estrogen or progesterone. Although we are far away from an open source platform for producing hormones, through this journey I was able to form collaborations with many others in the open source community, such as Paula Pin from Transhackfeminists and Gynepunk Lab; Ryan Hammond, who is working on Open Source Gendercodes; Spela Petric, who I collaborate with in a collective called Aliens in Green; and most recently with the Lifepatch citizen initiative based in Yogyakarta, Indonesia, where we investigate various strategies for addressing the most polluted river in the city. The materials of our bio hack sessions are usually low-cost and easy to find, and have ranged from transgenic yeast biosensors to silica gel, urine, cigarette filters, methanol, plastic, and of course, hormones.  Why estrogen, in particular? Are you working on other hormones, or would you consider doing so? Estrogen is interesting because not only does it code for our social and cultural ideas of “femininity” and regulate so much of our basic endocrine function in the body (for example, reproductive development, mood, and metabolism), but it can also be mimicked by hundreds of other toxic industrial molecules produced by our late capitalist economy, molecules we call “xenoestrogens.”  Some popular examples of these molecules are plastics — BPA and phthalates — synthetic hormones, PCBs, dioxins, pesticides, and soaps. Since the 1930s, these molecules have caused much of what Rob Nixon calls “slow violence”: environmental degradation and the marginalization of bodies and communities. There have been severe population declines in certain marine vertebrates because they can no longer reproduce, and humans are directly affected as well. The question is how our cultural notions of sex, gender, and reproduction will shift if we are surrounded by molecules that mutate our bodies and physiology. Ultimately this is an issue of body sovereignty and agency. Toxicity is never consensual!
What materials does a person need to make open source estrogen? How much knowledge of chemistry, and how much of code? It is currently out of reach for the average citizen to make estrogen in the kitchen. But even if it were possible, there are many risks involved with dosage and purity. Nonetheless I collaborated with two trans-femme artists, Jade Phoenix and Jade Renegade, and the production team Orgasmic Creative to make the short film Housewives Making Drugs, a speculative fiction piece that performs a urine-hormone extraction protocol as a way to make DIY hormones for you and your trans community.  Although based in both fiction and reality — the protocol originates from some of my estrogen geeking sessions where we extract hormones from urine by a column chromatography method using cigarette filters, silica gel, and methanol — I wanted to show the possibility that we can create alternative pathways to access our own health, especially as marginalized people who don’t usually have a voice in the scientific or medical community. At the same time, the film shouldn’t take away from the already long and enormous efforts by the LGBTQ community to gain greater access to hormone therapies.
Coding the Invisible Hand Ever since Thomas Hobbes portrayed civilized man trading obedience for protection to escape a perilous “state of nature,” security has been central to the liberal political tradition. Government, Adam Smith proclaimed, exists “for the security of property.” Similarly, John Locke insisted that the reason men put “themselves under Government is the Preservation of their Property.” Yet in Locke’s view, not all property deserved to be preserved: he defended the British seizure of Indigenous territory in the Americas. The question, then as now, is who and what is being secured—and at whose expense.
Today, market logic so suffuses the concept of security that the term literally means property, like the security deposit you make before signing a lease or the “securities” owned by the affluent. It was these ironically named securities that brought down the global economy in 2008. Traders, using algorithms that coded Black borrowers and homeowners as particularly exploitable, gambled with securitized mortgages boasting inflated ratings. At the same time, the multibillion-dollar “lead generation” industry, which uses digital tools to compile and sell lists of prospective online customers, enabled lenders to identify potential subprime borrowers. This process, experts say, “played a critical, but largely invisible, role” in the mortgage crisis. In the end, nine million families saw their homes foreclosed on, wiping out half the collective wealth of Black families nationwide, further devastating deindustrialized cities like Detroit. With a few strokes of a keyboard, modern bankers caused dispossession on a scale that put the landowners of the original enclosure movement to shame.
December 2000 – There are 22.5 million internet users in China. May 2003 – Alibaba launches Taobao, bringing everyday online shopping to consumers and connecting small shops to a broader market. Alibaba, with its intense focus on China’s specific needs, out-competes eBay, which eventually leaves the Chinese market.
2003 – Severe acute respiratory syndrome (SARS) outbreak. The spread of SARS news online, combined with the government’s efforts to cover up the severity of the outbreak, demonstrated the ability of internet and mobile phone networks to break the propaganda authorities’ hold on a major story. 2004 – Alipay launches, facilitating online payments. By holding payment in escrow until a buyer is satisfied with the product, Alipay gives buyers confidence in online platforms.
So when the APA’s Task Force on Gender Identity and Gender Variance met in 2008 to write a report updating its approach to questions of gender, one would expect them to take these challenges seriously. And they did address them: in a footnote, after reasserting the validity of the 1970s numbers. Not only that, but the footnote in question mentioned Conway’s work only in order to dismiss it because it “seems to represent a minority position among researchers, although transgender activists tend to endorse the study.” No methodological challenge; no claim the study was incorrect. Simply wholesale dismissal. Why? Because if the answer were true, the medical researchers would have spotted it before Conway did. Because if trans people agreed with the estimate, it was automatically suspect.
The reason for this response is fairly obvious: power. Professionals in trans healthcare—particularly at GICs—get a lot of their official power and authority from the perception that they are singular experts in all things trans. The Clarke Institute in Canada, for example, was for the longest time the Canadian government’s sole source of expertise on (among other things) how trans people should be treated in prison. And if those experts admit they can’t even be trusted to count—well, what can they be trusted on? Accurate or not, published or not, Conway’s study challenged clinicians’ authority. It’s interesting to note that in the newest version of the Diagnostic and Statistical Manual (psychiatry’s bible), the APA did provide a higher prevalence (and a caveat that this was likely an undercount), but credited GIC-based researchers with this discovery.
What did you do if you were stuck on something outside of class? No one was there to help. Could you all work through it together? What was that problem-solving like? We would just have to theorize about things and maybe write them down. Some of the guys actually had their own study sessions in the gym. They would meet, draw stuff out on paper, and figure it out that way. And then hope there wasn’t a lockdown, so they could get back to the computer and test it to see if it worked.
Did lockdowns happen frequently? There’s a lockdown right now. I would say it happens two or three times a year. You just never know when—or for what. This time they said one of the officers lost a bullet. I call it the magic bullet because it seems to happen about once a year. All of a sudden a bullet is missing, and everybody needs to be locked down.
How do you see crypto mediating trust in China? In China, crypto provides two things that are really nice. The first is automatic transactions that deny parties the opportunity to cheat. I can swap thing Y for thing X, and it just happens, with no counterparty risk. The second is that crypto transactions happen on a transparent ledger, so you can audit what's going on. To the extent that you could move some organization entirely onto a blockchain, you'd have perfectly auditable books forever.
The Chinese government is excited about blockchains, and that’s part of the reason why. There is a huge problem in China with corruption, and blockchains bring increased transparency. What’s your take on all the big companies like Alibaba getting into the blockchain space? It strikes me as a classic innovator's dilemma. I’m a little skeptical because I don't know if they're actually going to be willing to cannibalize their main businesses to fully embrace this new thing. There are a lot of smart engineers working at those companies, so I can see them coming up with something interesting. That’s a good sign—the more smart programmers you have looking at the problem space, the better.
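To make the “perfectly auditable books” idea above concrete, here is a minimal sketch of an append-only, hash-chained ledger in which any later edit to a past record is detectable. It illustrates only the auditability property being described; it is not how any real blockchain is implemented (there is no consensus, no distribution, and no signing here).

```python
# Minimal append-only ledger: each entry commits to the previous entry's hash,
# so tampering with any past record breaks the chain and is visible to auditors.
import hashlib
import json

ledger = []

def _digest(record, prev):
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(record):
    prev = ledger[-1]["hash"] if ledger else "genesis"
    ledger.append({"record": record, "prev": prev, "hash": _digest(record, prev)})

def audit():
    """Recompute every hash; return False if any entry was altered after the fact."""
    prev = "genesis"
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != _digest(entry["record"], entry["prev"]):
            return False
        prev = entry["hash"]
    return True

append({"swap": "X for Y", "from": "A", "to": "B"})
append({"swap": "Y for Z", "from": "B", "to": "C"})
print(audit())  # True: the books check out

ledger[0]["record"]["to"] = "someone else"  # quietly rewrite history
print(audit())  # False: the tampering is detectable
```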
A young securities analyst named Arthur Rock tried to find funding for them. He pitched thirty large aerospace and electronics companies. All refused. The only place they could secure financing from was a failing aerial camera manufacturer named Fairchild. Its founder, Sherman Fairchild, was an eccentric bachelor who happened to be a prolific inventor, heir to the IBM fortune, and sympathetic to their cause.
They eventually closed the deal. Those eight engineers—the “Traitorous Eight”—would go on to form Fairchild Semiconductor through a $1.38 million loan from Fairchild Camera for their first eighteen months of operation. And Rock would in turn become one of Silicon Valley’s very first venture capitalists. He would later move to the Bay Area in 1961 and realize $90 million of proceeds from an initial $3 million investment in companies like Teledyne and Scientific Data Systems. Meanwhile, the alumni of Fairchild Semiconductor—or “Fairchildren,” as they were called—would go on to include Intel cofounders Robert Noyce and Gordon Moore, Sequoia Capital founder Don Valentine, and Kleiner Perkins cofounder Eugene Kleiner.
Using the language of open-source programming communities, g0v claims that its goal is to “fork the government.” Digital Minister Audrey Tang, who is herself a veteran of the Sunflower Movement and a longtime contributor to g0v, explained the concept to me. Essentially, it means that g0v hackers produce alternative versions of government websites. “For each government website—which always ends in ‘gov.tw’—that they don’t like, they just change the ‘O’ to a ‘0’” in the domain name and create their own, Tang said. In the process, the g0v community has implemented a wide range of digital tools designed to increase popular participation in policymaking, from online platforms for circulating petitions to data visualization dashboards that help citizens understand how budgets are allocated.
When the pandemic began, g0v responded creatively, using its “fork the government” model to help the authorities contain the virus. Perhaps the best known example is the collection of digital tools—apps, maps, chatbots—that g0v hackers created to make it easier for the public to buy masks through the government’s mask-distribution system.
So I think a lot of the strong AI stuff is like that. A lot of data science is like that too. Another way of looking at it is that it’s a bunch of people who got PhDs in the wrong thing, and realized they wanted to have a job. Another way of looking at it—I think the most positive way, which is maybe a bit contrarian—is that it’s really, really good marketing.
As someone who tries not to sell fraudulent solutions to people, it actually has made my life significantly better because you can say “big data machine learning,” and people will be like, “Oh, I’ve heard of that, I want that.” It makes it way easier to sell them something than having to explain this complex series of mathematical operations. The hype around it—and that there’s so much hype—has made the actual sales process so much easier. The fact that there is a thing with a label is really good for me professionally.
Continuations Khadijah and Xiaowei will be building upon the foundational infrastructure and editing that has been a labor of love by a network of people over the past six years. This includes: Jim Fingal, Christa Hartsock, Ben Tarnoff, Moira Weigel, Celine Nguyen, Jen Kagan, Alex Blasdel, Sarah Burke, Max Read, Aliyah Blackmore, Jacob Kahn, and many others. The last issue of 2022 is appropriately themed “Pivot.” This issue marks a turning point for Logic, and is about transitions of all kinds. It features reflections on the first six years of Logic by readers, contributors, and members of the core team.
It seemed like half of Los Angeles had turned out for boat tours at the Port of Long Beach: parents corralling toddlers, couples on dates, even dog owners in line for pet-friendly tours. The port offers free guided tours to the public once a year, a sort of goodwill gesture to the community that has suffered decades of pollution as a result of its activity, and that of the adjoining Port of Los Angeles. After two hours of waiting, I filed onto an erstwhile whale-watching tour boat, where I took in the port’s enormous container ships. Like my fellow tourists, I was excited to get a glimpse of the scale of operations necessary to keep the nation supplied with toilet paper, plastic toys, and every other conceivable good. Squinting against the sun, I tried to imagine the ships another way: as numbers on a screen, cells in a spreadsheet, dots on a grid. I’d been reading about the information transfer that accompanies the movement of these vessels, and I knew that the scale of this data is nearly as impressive as the ships’ sheer size. Ships like those docked at Long Beach are vital links in the global supply chain, but they’re also floating “data terminals,” as the global maritime industry consultancy Lloyd’s Register put it in 2015. Increasingly, these vessels receive and transmit an enormous amount of information: about their position, of course, but also about weather, traffic, temperature, maintenance, staffing, ocean conditions, and much more. The streams of information are so complex that they threaten to exceed humans’ ability to interpret them. That’s partly why many newer vessels—“smart ships,” in industry parlance—use complex algorithms (some of them devised by Google and Microsoft) to chart their courses. Within the next decade, carriers hope to launch fleets of automated or remote-controlled vessels—“ghost ships,” as they’re sometimes called.
The flip side is that the entire financial industry also has an incentive to encourage people who don’t know as much as them to give them money to do all the things that ordinary investors don’t know about. “Give me money to use a machine learning technique to manage your money, even if the machine learning technique doesn’t work, because it’s very profitable for me to take 2 percent of your fund every year.” So the incentive to make the market more efficient is balanced against the excessive proliferation of financial services that don’t add value.
What is the mechanism that’s going to eliminate that? Well, it’s the recognition that the industry as a whole may be getting paid far in excess of the value it’s providing. How does that recognition actually begin to remake the industry, and what role will new technologies play in that process? The short answer is that tons of jobs are on the verge of getting wiped out because technology can do those jobs. And there are benefits to scale, so you may not need many firms to replace those that don’t survive.
Forms of platform labor organizing in Jakarta and Bengaluru reflect some of the varied strategies workers in the Global South have adopted to survive and transform their precarious working conditions—low pay, a lack of standard contracts or benefits, physical danger, and threats of violence. In both cities, mobility platform drivers have found ways to develop social support structures, underpinned by mutual aid, while also investing in collective identity and power. Yet the forms these relationships have taken in the two cities vary—a reminder of how important context is to understanding or advocating worker collectivization. These strategies signal possibilities for tech workers facing increasingly similar precarious conditions around the world.
Basecamp In Jakarta, Mba Mar, a ridehail motorbike driver, spends more time at her driver community’s basecamp than she does at home. In this roadside shelter, constructed over the course of a year by her community of ojol, or mobility platform drivers, she dispenses advice to new drivers, charges her cellphone, catches up on news floating around driver WhatsApp groups, and waits for the mobility platform she works for to match her with the next order. Every day, as she rides her motorbike around the city in her personalized jacket, embroidered with her community’s emblem, she knows she is not alone. Her fellow ojol “have her back.” Mar’s community is just one of the hundreds of platform driver collectives spread across Jakarta. Each has its own membership rules, ranging from moral expectations (members must be honest) to socializing expectations (members must remain an “active” part of the WhatsApp groups, attend all social events of the community, come to the basecamp at least once a week, and so on). Communities hold internal elections and have mandatory monthly member meetings. Some even have membership fees, which go into a common pool of money used to support community expenses. Most communities have built basecamps where drivers meet between orders, some calling these spaces their “second home.” Many issue ID cards to identify members in case of road accidents, and as a way to solidify their sense of belonging. Collectively, they have set up their own joint emergency response services, and informal insurance-like systems that use community savings to guarantee members small amounts of money in the case of accidents or deaths. They have also provided their members with Covid relief, such as distributing personal protective equipment and free groceries.
Although I do really like Westworld. I was going to ask you about that. It’s like there’s this particular media moment right now—there’s a lot of good television that revolves around these questions, it’s science fiction but it’s increasingly closer to reality, at least in the popular imagination. So like Westworld, or Black Mirror. There was Ex Machina not too long ago. I’m curious what your thoughts are on that.
The rate of progress in AI over the past decade has been astounding. Ten years ago, Go was something that would never be solved by anybody, and now it’s there. That required tremendous leaps forward. And so I think that although the popular imagination is always going to be leaps and bounds ahead of what’s realistic, a lot of that is a reflection of the progress that has in fact been made in the past decade. Whether that’s because the actual technology itself is in the golden age and will soon revert back is a good question.
Despite the staggering number of satellites, the business of capturing satellite imagery is dominated by a small number of major players. DigitalGlobe and GeoEye (formerly Orbital Imaging) merged in 2013; the resulting corporation, which later became Maxar Technologies, was the largest satellite imagery company in the United States—a monopoly, in effect. Planet, founded by ex-NASA scientists in 2010, initiated a new chapter for the industry, launching small-scale microsatellites that could capture imagery of the entire planet at least once a day. The miniature satellites themselves are called “doves,” a name that reads as ironic given the company’s contract with the NRO, renewed in late 2021. Planet, which went public on the New York Stock Exchange just weeks later in 2021, has been hailed as an industry disruptor for years. Maxar and Planet have emerged as twin giants of the industry: one supplies high resolution, the other, speed.  The granularity of satellite imagery can be divided into three categories: low resolution (over 60m/pixel), medium resolution (10–30m/pixel), and high resolution (30cm–5m/pixel). The precise resolution of NRO satellites remains classified but continues to occupy the highest rung, while public research satellites like Landsat 1 capture the medium to low resolution needed for climate science. Historically, commercial satellites have been restricted to selling imagery up to 50cm/pixel (lowered to 40cm in 2014, then 30cm in 2015) despite their capacity to produce much higher resolution, as is the case with Maxar’s satellites, in particular. While Planet satellites aren’t capable of capturing the same sort of resolution (they max out at 50cm/pixel), the sheer number of microsatellites they have launched means that Planet has the largest constellation of satellites ever assembled. As of 2022, Planet’s doves can capture and transmit imagery at least once a day, and this “guaranteed collection” business model is, in part, responsible for their recent IPO.  In other words, if Landsat imagery can capture a retreating glacier, Maxar can capture every crack—and Planet can capture its movement day in, day out. The NRO satellites? Who knows what they can do.
The GEOINT Singularity Thanks to their size and technological advantages, Maxar and Planet have become the go-to suppliers of satellite imagery used to document everything from tornado damage to the 2021–2022 military actions in Ukraine. According to its own statements, Maxar “provides 90 percent of the foundational geospatial intelligence used by the US government,” and was initially the sole supplier of imagery to the US government. Since 2019, the NRO has subscribed to Planet’s services (a contract that was recently expanded). Meanwhile, other companies like Satellogic (which partnered with Palantir in early 2022) and BlackSky (contracted to the NRO and NASA) have emerged with similar capabilities.
TM: We are very reactive when things happen because there are no conversations being held with our community about changes being made. That flow of information seems to not flow to us in the way that it should, which is where I want OBA to be able to step in. I want us to be a hub. A lot of these technology companies, real estate developers, and all kinds of folks who are just making their way over to Brownsville claim that they can’t find anyone to talk to. I’m just like, there’s so many community organizations doing work in Brownsville. How could you not find anyone to talk to and get insights from? So, you decided to still push forward with whatever this project was, without speaking to anyone in the community or trying to have a town hall or anything, despite the fact that we are very open to all of those things?  Why aren’t these companies speaking to the community-based organizations? I want OBA to be a hub where they know they can come here and speak with community members who have expertise in different areas, so that we’re not just like, “Oh, what the heck is that? We never heard of this. What are you doing here?” Instead, it’s like, “Oh no, we had conversations with these people. We let them know that we wanted this and that to happen, and this is what we don’t want to see.” Those conversations don’t happen. It’s always a too little, too late kind of situation for us, and I want to change that.  I want us to be involved in the conversations that are being had about the space that we occupy. Like, we live here. There are people here that have a brain and they have wants and needs, and they want to see different things in their community. When Fabian was speaking about gangs, how kids end up in those situations is that they want to go outside, but we don’t have green fields for them to sit in the grass and look at the sky and just ponder. We don’t have spaces like that. They come outside of their homes into all kinds of confusion. Young people have to navigate through these communities, digesting what they see and that’s how they learn. And it’s not always the greatest thing when they don’t have someone, or an organization, there to explain to them what they’re experiencing, so that they can make better and more informed decisions about how they want to navigate through the community.
Housing and (De)Funding The Police As far as the gang database, who gets categorized as a gang? You mentioned that National Grid is putting poisonous gas under the ground, affecting a lot of people. Nobody is calling them a gang, right? But if you’re fifteen years old, Black, with certain colors on, you’re more likely to be identified as a gang member.  Tranae, you mentioned the density of housing projects in Brownsville. People don’t seem to understand that you have a high density of projects, but you also have middle-class home ownership and residents with white-collar jobs. So, even when we talk about community, people within the community have very different relationships to the intensity of surveillance and policing. I also know from my experience, caseworkers live in Brownsville, so there’s people who sit in a lot of different places and have different relationships to policing. How are you thinking through class differences and funding relationships as you organize in Ocean Hill-Brownsville?  TM: In Ocean Hill, there’s a larger amount of home ownership than there is down in Brownsville, which I’ve found to be one of the dividing factors. This is one of the reasons why OBA started and why we have this name—I wanted to bring the two communities together because we are separated due to the infrastructure of Brownsville and the density of the housing projects. We have the same issues, but the intensity of those issues is greater in Brownsville. Ocean Hill will probably be gentrified way faster than down in Brownsville. It’s happening at the same time, but the changes are happening a lot quicker in Ocean Hill.  I’m not finding that everyone wants to defund the police here. The police have actually been very supportive of OBA, thus far, in our events and organizing outdoors. I think it’s about the people and not just police in general, because we have been met with folks not being happy about our presence outside. For example, business owners have called the police but when the police arrive, they’re actually helpful and they like what they see us doing, which is creating space for imagination and joy, and providing information on these invisible harms: facial recognition, surveillance, and fracked gas pipelines in Brooklyn.
The offline vs. online efficiency question is an interesting one that I am not sure I have an answer to. I’ve always found talks and conferences and such kind of odd because, when you go to them, you are speaking to a few dozen people and maybe a couple hundred at most. But if I deliver the same content online, I can easily reach tens of thousands (and often more) just through my own social media distribution. For instance, the first 3P video was shared by Robert Reich and Bernie Sanders and received over half a million views on Facebook alone. But nonetheless, people see doing offline stuff as the real, impactful work for some reason. I get that when it comes to organizing real people, but I don’t really get it when it comes to spreading ideas throughout the discourse.
The kind of offline work that I suspect is really valuable is paradoxically the stuff that is the most private: meeting with politicians’ staff, the editors of other publications, and that sort of thing. I do more of that than I do big public appearances. We’d love to hear your broader thoughts on the prospects of using the internet to build alternative political institutions that can challenge the prevailing common sense. Do you think the Right or the Left has been more successful so far in capitalizing on the new opportunities created by the internet? The value of the internet is primarily the fact that it has quasi-free distribution. I can reach millions of people for nothing. This means I can also fundraise from millions of people for nothing—or thousands, in my case. The social aspect of it means that people I reach can in turn share it with people they know, also for free. This allows for a level of coordination that wasn’t previously possible outside of huge institutions.
Sextech, by contrast, steers clear of this radical message. Some sextech looks radical, but it essentially rephrases watered-down feminist insights for a general audience, and musters new data in order to teach old-fashioned communication skills. At its best, sextech treats women’s sexuality not as a pathology requiring medication (e.g., the dismal “female Viagra” that hit the American market in 2015), but rather as a product of cultural conditioning and education. But sextech remains deeply individualist—it styles itself as neoliberal self-help rather than as an instrument of social transformation. And its ambitions are modest: OMGYes and other platforms aim at incremental sexual reform rather than sexual revolution.
As a counterpoint, it’s useful to consider one of the pioneers of sextech, and sexual revolution: Wilhelm Reich. Reich’s visionary, utopian, and at moments utterly barmy schemes were predicated on Marxist politics. Reich preached the power of sex and libido as a source of “bioenergy” in his work The Function of the Orgasm: Sex-Economic Problems of Biological Energy (1927). His Sexual Revolution (1936) made the case that political-economic formations, whether authoritarian or capitalist, relied on sexual repression to keep people in line. The patriarchal family structure “dammed up” libidinal energy as a means of social control.
If one expands the set of comparisons on police and military stockpiles to other OECD states, America also starts to resemble our unfortunate next-door neighbor Mexico. There, civilian rates of legal gun ownership are quite low—but Mexico is also where ongoing violence between drug cartels and security forces generates a yearly body count on par with what one would expect from an outright civil war.
But then there are our other peers, in Canada, Western Europe, and the Scandinavian countries. Much beloved as go-to benchmarks of stability and low gun crime for liberal American pundits, these states also have surprisingly large quantities of guns, civilian and otherwise. What deeper structure produces this strange set of statistical bedfellows? The answer, simply put, is capitalism in general, and the arms trade in particular.
As a way to figure out what writers and artists would actually want to use, we decided to create a magazine called COMPOST. It’s an initial use-case for the tool—the magazine is a lab for Distributed Press in a way. We decided to make the project even more meta, and make the magazine itself about the digital commons, while Distributed Press was about doing the organizing and technical work of building shared, free and open source digital tools.  We paid all the contributors for their creative pieces, and we also compensated them for contributing their feedback and ideas about how Distributed Press and COMPOST should work. We felt that building tools for artists and writers to use on the DWeb necessitated having their active input. Our goal is to continue giving artists agency over how this tool evolves.  Distributed Press is not yet a cooperative. Udit Vira, Benedict and I are the core contributors, and we were very explicit about the fact that to move this project forward we had to be able to make a bunch of the initial decisions. Having too many cooks in the kitchen can gum up the process of shipping something out. But we had a series of meetings with the contributors, took their input, and tried to implement their ideas as best we could. We’re hoping that with each issue we will have different cohorts of writers and creators that will shape not just the magazine issues, but also Distributed Press itself.
There are different ways that we see Distributed Press as a budding digital commons project. It’s a free and open source tool, and we’re also contributing to these distributed protocols and projects upstream—as part of this work, Ben filed tickets with IPFS, Beaker Browser, and Hypercore. When he’d come across things that didn’t work, he’d do the work of flagging issues and suggesting ways to fix them.
On November 8, 2018, a live 115 kV line broke off from a transmission tower owned by the Pacific Gas & Electric Company (PG&E) in Northern California. The tower was ninety-nine years old—twenty-five years past the expiration of its planned operating lifetime. The wire hit some vegetation and set it alight. The fire rapidly intensified, with ample fuel from dead trees that had been killed off during the recent historic drought. Before a proper evacuation could be organized, the fire roared through the nearby town of Paradise and destroyed it completely. This was the deadliest wildfire in California history. Eighty-five people were killed and 14,000 homes were lost.
The destruction of Paradise was the result of interlocking trends within capitalism, technology, and ecology—as are other recent catastrophes of power infrastructure, such as the February 2021 outage in Texas that had me revising this essay in the cold and dark. The transmission and distribution grid is arguably the very foundation of modern society. It harnesses, stabilizes, and distributes energy in its most fundamental form: electricity. This is a delicate, complex, and dangerous task that is crucial for public welfare. And yet, across much of the US, the grid is controlled by investor-owned utilities (IOUs) like PG&E: corporations whose core purpose is the maximization of private profit.  As regulated monopolies, IOUs do face certain constraints on how they can go about making money. Nonetheless, the logic of profit maximization generally holds: minimize costs associated with labor, maintenance, and operations, and keep electric rates as high as possible, in order to maximize returns to investors. One direct consequence is that infrastructure is allowed to fall into disrepair. Investigations into PG&E after the devastation at Paradise revealed a systemic tendency to run equipment to the point of failure, rather than invest in inspections and preventative maintenance. In fact, many of the most destructive fires in recent California history have been caused by failing PG&E equipment, which supplied the literal sparks to set fire to the dead vegetation that is accumulating across the state due to historic, carbon-exacerbated droughts. Meanwhile, PG&E shareholders have reaped the benefits. In a 2019 court case, a judge pointed out that the company has paid out billions in dividends over the years while neglecting its maintenance and land management responsibilities.  What if the grid were owned in a different manner—say, by the same rural communities that have suffered so much at the hands of PG&E? What if serving the needs of these communities, rather than enriching investors, was the purpose of a power utility?
That seems like it gives you the opportunity to curate the conversation around “what we are talking about when we talk about the distributed web.” The DWeb Principles came out of that vacuum of political agnosticism, to identify what it is that we are actually saying when we say we want to “decentralize” the web. We were very explicit that it’s not just another new set of principles that ignore or replace other ones. We cite many other principles, like Association for Progressive Communications’ Feminist Principles of the Internet and the Design Justice Principles. Our principles emphasize that this particular community is concerned with design issues like interoperability, being free and open source, having repairable devices—we tried to really name the things that this community cared about. And having put that first stake in the ground, I hope that will shape the dialogue around what it is that we actually mean when we talk about decentralization and how some of these technologies are being built.  Declaring those principles feels like an opportunity to move the center of the conversation for the people already involved, and supply a different set of baseline assumptions for people newly being introduced to the concept of the decentralized web.  Yeah, exactly. The act of creating these principles is also in and of itself a tool for people within the space to have a dialogue with each other. It’s similar to the kind of inter-organizational statements I used to coordinate as a digital policy organizer. The process of creating them involves a ton of back and forth, and it’s this back and forth that’s so valuable. Creating an environment where people can be in dialogue with each other to assert their values, listen to others, and negotiate the desires of their shared dreams—that’s a powerful tool of organizing! The hundreds of edits across the draft versions are artifacts of this process. One of our goals with the principles was to bring the community of people interested in decentralized technologies a bit closer together. Our other goal was to have a document we can use to introduce people to this space and succinctly describe what we’re about.
Putting the principles into practice requires recognizing our own needs and addressing them in creative ways in solidarity with others. That is what centralized top-down systems cannot do—create locally-situated networks of communication that emerge out of the imagination of people who live in their unique economic, social, cultural contexts. If decentralized technologies are designed with an assumed universality—that there can be one blueprint for organizing communication across diverse knowledge systems and language cultures—they’ll become just as problematic as Facebook.  The act of creating the principles is useful in and of itself, as an act of community organizing beyond whatever the output is. Even if the output was to disappear, the connections between the people and the organizations will still exist.  Totally! Investors and people in Silicon Valley go to parties together where all this soft power, informal trust-building happens. And it sucks to some degree, but we’re humans and that ability to talk to each other and establish personal trust is really, really important. You need to have new ways of creating alternatives to high-end galas and backchannel investor conversations, and have other ways for people who are trying to build alternatives to talk to each other and build trust.  What Works in Bangalore A lot of the projects that get airtime in conversations around decentralized and distributed technologies seem to be centered in the Global North. What are some ways in which your activism has connected these new kinds of technologies to communities in the Global South?  Before DWeb Camp in 2019, we connected with the Association for Progressive Communications (APC), an amazing organization that is essentially a coalition group and advocacy organization that connects progressive and feminist tech-based organizations from around the world. [Ed: DWeb Camp was a four-day retreat in Pescadero, CA, organized in association with the Internet Archive.] Through that connection, we invited ten global fellows from APC’s Community Networks program to the camp. They came from India, South Africa, Brazil, and elsewhere, to facilitate knowledge exchanges about internet infrastructure, open hardware, and community networks.
Another internet phenomenon that fascinates me is live cams: these websites where you can go and watch couples having sex in a very exhibitionist way. You can see the same couple day after day after day and get a sense of them as human beings, of the narratives of their lives. That’s really fascinating in the way that it suggests the possibility of new kinds of intimacy.
Really, our whole sense of intimacy has been utterly transformed. And that’s true even without porn, when you think about how porous our privacy has become because of social media, and how much of our private lives we share. It’s fascinating. And it’s scary. And yeah, I guess I feel as ambivalent about it as I do about everything else. The internet has produced a multiplication of possibilities for pleasure—but also a multiplication of possibilities for the abdication of inwardness and solitude and meditativeness.
Curiously, however, the origins of these very same AI systems now installed in mail processing plants—known as “deep” neural networks—can be traced back to the USPS. In the late 1980s, a young researcher at AT&T named Yann LeCun began to experiment with neural networks using a dataset provided by the Postal Service. It contained about 9,000 images of individual digits, culled from handwritten ZIP codes. To this day, a modified version of the dataset is ubiquitous in computer science curriculums, serving as a benchmark for handwriting recognition systems. LeCun, now the head of AI at Facebook, expressed his gratitude to the Postal Service in an early published technical paper, citing the “very hard tasks” performed by USPS’s engineering department in preparing this dataset for use by AT&T researchers.
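The benchmark task that grew out of that work, recognizing handwritten digits, can be shown in miniature. The sketch below is a rough stand-in only: it uses scikit-learn's bundled 8x8 digits dataset rather than the original 16x16 ZIP-code images, and a small fully connected network rather than LeCun's convolutional architecture.

```python
# A miniature version of the handwritten-digit benchmark: train a small neural
# network on scikit-learn's bundled digits dataset and report test accuracy.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 1,797 grayscale 8x8 digit images
X = X / 16.0                         # scale pixel values to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# One hidden layer of 64 units -- far simpler than a convolutional network,
# but enough to show the shape of the task.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```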
Just a few years before LeCun was given access to the formative ZIP code dataset, the USPS had been forced to acquiesce to AT&T on another front. Despite message volume increasing by a factor of ten in its first two years, the USPS’s ambitious E-COM program—a proto-email system—was discontinued after pushback from the private telecommunications industry. AT&T led the charge, claiming that it was unfair that the “Post Office is being encouraged to provide a kind of service” that “private industry is able to do.” That the Post Office could afford to invest in innovative and promising new technology, even when it was unprofitable, was an outrageous notion. Reagan’s Federal Communications Commission agreed, bringing to an untimely demise what may well have amounted to a publicly owned, state-of-the-art digital communications infrastructure.
What is a termed position? And what are the other positions available in government, whether salaried or contract? The federal government has four types of positions that I’m aware of. You have General Schedule (GS) employees, who are salaried employees who work for the federal government. They’re somewhere between GS-1 and GS-15, which is a salary scale, and they hold competitive service positions. Essentially, that means they have the ability to apply to any position within government, compete for it, and get it as a long-term position.
Then there are people like me, termed service. To make my position valuable, they let me slide into GS-15 step 8, which is two steps below the maximum salary of GS-15 step 10. They try to match our salaries from the private sector. They couldn’t—they could only match two thirds of what you make in fintech in New York. But they tried, which I appreciate. I was completely aware that I was going to take a salary hit by switching.
That’s what I think makes the GSA’s Centers of Excellence (CoE) so special; because the CoE is a part of the Federal Acquisition Service, we can cut down the time it takes to get a new system. But there is value in people like me, people who are angry at the way government works, sticking around for five years, for ten years, to see these things through.
Right now, government is shortcutting the hiring process with termed employees like me who have short two-year stints to try to make change. I don’t think that’s effective because you need someone to stay and see it through. Especially for an organization like the federal government where, every four years, regardless of what happens from the mission perspective, leadership goes away. Targets change. To create sustainable change, we need angry people in the same chair for more than two years.
Such a movement will require a broad, organized assault on the neoliberal political apparatus. Had I the recipe for such a program, I would gladly share it. However, I am confident that our economy can be, if not remade, at least equalized, and the bootstrapping cycle broken by the people within these institutions.  This will require a split within the professional-managerial class. So-called “helping professionals” must break with the tech workers with whom they shared college dorms and now share apartment buildings. Instead, they must organize at the point of reproduction in solidarity with their patrons and students, based on the recognition that both helper and helped share an interest in preserving the institutions of public service threatened by the access doctrine.
Strategically, both sides need each other. The helping professionals are few in number, and state and capital can easily paint their protests as a dereliction of sacred duties (e.g., “teacher strikes hurt kids”). They need support from the communities they serve in order to stand up to this political pressure. On the other hand, students, patrons, and their communities have numbers, but lack the strategic position within the mode of reproduction that helpers’ work provides. A protest in support of library patrons is one thing, but a library shut down by its workers is another thing entirely—revealing just how much the city relies on those institutions to function and the sort of power those professionals have. That work is already happening across the country. The Chicago Teachers Union led the way in 2012 (and again in 2019), organizing teachers and parents alongside workers. Recent teacher strikes across the country have used the same tactics to join struggles against school privatization and school policing with teachers’ fights for better jobs and better pay.
So I'm still thinking about labor and open source and access in tech, but there are some life or death issues in front of us that software is not going to solve. The technology industry has landed at this point where it is not separable from prisons, policing, and surveillance. If companies are there to make the profits, do the launch, and get the return on investment, they are going to be doing this work. Look at policing as a budget line item in our cities, even small cities. Look at the military contracts, the intelligence contracts. That is where the money is. Companies say, “Oh, well, we don't work with ICE; we only work with DHS!” Or “We work with Raytheon, but not on the knife missiles!”  We have to get rid of ICE. If we don't want Palantir helping DHS identify immigrants to put in cages, we have to get rid of Palantir and DHS. We have to abolish these systems and, when we do, the tech industry will have to find a way to make money some other way.
Amazon is a huge and complex organization. How should we think about it as a whole? Amazon is an opportunistic corporation. It invests in businesses where we think we have a competitive advantage. In general, Amazon thinks of itself as a technology company. So we put the technology first, whatever the product is that we’re selling. And we believe that because we have so much talent and so much capital, we should be able to use our technology advantage to dominate any market that we decide to enter.  What were its origins? Why did Amazon start off as a company that sold books on the internet? In the mid-1990s, the internet was widely seen as a replacement for the library—the library 2.0—so figuring out how to buy books on the internet felt like a natural next step. A little later on, you could see some of that same spirit living at Google through the Google Books project, which was an enormous undertaking. They put a hugely disproportionate amount of resources into it. Amazon’s ultimate goal was similar to that of Google Books: to digitize all of the information in the world’s books and make it available universally, because that was the promise of the internet.  Jeff Bezos studies other “great men” in history and imagines himself to be a kind of Alexander the Great. There's even a building on the Amazon campus called Alexandria, which was the name of one of the company’s early projects to get every single book that had ever been published to be listed on Amazon.
Doing so will require acknowledging how the material substrate of our lives is intimately, and often violently, connected to ecosystems and people beyond our borders. Trade, production, and consumption could, in theory, be reorganized to prioritize climate safety, socio-economic equality, Indigenous rights, and the integrity of habitats.  Yet achieving such an outcome will take political power, strategically deployed. Amid the overwhelming complexity of contemporary capitalism, it’s easy to forget that supply chains are not the product of geographic destiny. Indeed, a key aspect of environmental injustice is that contaminating processes — mines, power plants, or factories — are sited where ecosystems and human lives are seen as disposable or deemed to lack political influence.  The corollary is that force from below can obstruct and even reshape global flows. This force is particularly effective when exercised at “chokepoints”: points of obligatory passage for people and goods. In addition to the factory floor itself, the infrastructure of logistics (ports, ships, warehouses) and the sites of extraction (mines, rigs, refineries) are potential bottlenecks, and thus nodes of vulnerability for the system as a whole. In other words, they are strategic sites for disruption.  I might not know the exact shape of the world I want. The present weighs heavily and makes imagination difficult. But I know it starts with relating to this planet’s bounty as mysterious, vital, and nourishing; envisioning abundance as shared flourishing; and broadening our solidarities to encompass people we may never meet and places we may never visit but whose futures are bound up with our own. The salar will thank us.
Let’s begin at the beginning. Can you tell us a little bit about your childhood?  I grew up in Denver, in the kind of white, middle-class neighborhood where people had gotten mortgages to build housing after the war. My father was a sportswriter. When I was eleven or twelve years old, I probably saw seventy baseball games a year. I learned to score as I learned to read.
Still, the mining rigs in the Collins household run most hours of the day. And so does Cassie’s trading. “If something comes up, I’ll drop everything to look at the chart… I can think of times that I’ve been pulled out of bed at three in the morning when my husband has seen something on my phone,” Cassie says. Even though Owen and Cassie get up early, go to bed late, and don’t take weekends, they seem to thrive off the work. “We’re the kind of people that can’t take a vacation,” Cassie says. “We got married young, we didn’t have a honeymoon, we had kids, we’ve never taken a break.” A Fork in the Road There is one way to get a refund for stolen crypto: every miner can agree to undo the transaction. In other words, if a blockchain is just a list of transactions that a group agrees upon, then a coordinated effort can be made to get every individual in that group to remove the theft transaction from their list. This involves what is called a “hard fork,” and it is a massive undertaking, both technically and politically. It is also extremely rare.  Only an unmitigated catastrophe can prompt such a response. In the summer of 2016, this is exactly what Ethereum was headed for. The first major application for the Ethereum network was about to be released: the Decentralized Autonomous Organization, or DAO (pronounced “dow”). DAO was designed to be a democratic venture capital fund. Anyone could purchase shares using Ether, and then be able to vote on projects for the fund to invest in.  Buzz about the DAO reminded Cassie and Owen to check on the Ether they had bought a while back. When they did, they found it had grown close to seventy times in value. It was more money than they had ever had in their life together. Swept up in DAO euphoria, they were ready to make another big investment. Or, as Owen puts it, “Like some dumbass newbies, we took our Ethereum, one hundred percent, and put it in the DAO.” They were hardly alone: the DAO was the largest crowdsale ever up to that point, raising over 10 million Ether — worth $200 million at the time.
Then disaster struck. Within a month of DAO’s launch, a hacker found a hole in the code and began to drain funds out at a rate of $4,000 worth of Ether every three or four minutes. The attack ran into its own bug six hours after it began, but the hole was still exposed. The Ethereum community was divided over how to respond. Some DAO victims begged the developers to get them their money back by doing a “hard fork” and reverting the attack. However, a group of ideological purists argued that a “hard fork” would erode Ethereum’s promise of running code that no authority could interfere with. The Ethereum developers were too chummy with the DAO developers, they insisted. And, by leading an effort to clean up the mess, they were undermining the decentralization that made Ethereum so appealing in the first place.
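To make the mechanics a little more concrete, here is a toy Python sketch of the idea described above: if a blockchain is treated as a shared list of transactions, a hard fork amounts to the community agreeing to adopt a new history that excludes the theft. The transactions and the function below are hypothetical illustrations; real forks, including Ethereum's fork after the DAO attack, work by changing the protocol's rules and state at a chosen block rather than by literally deleting entries from a list.

```python
# Toy model: a ledger as a shared list of transactions (hypothetical data).
ledger = [
    {"id": 1, "sender": "cassie", "recipient": "dao", "amount": 100},
    {"id": 2, "sender": "dao", "recipient": "attacker", "amount": 100},  # the theft
    {"id": 3, "sender": "owen", "recipient": "dao", "amount": 50},
]

def hard_fork(chain, bad_ids):
    """Return the new history that forking participants agree to treat as canonical."""
    return [tx for tx in chain if tx["id"] not in bad_ids]

# Nodes that adopt the fork follow the new history; holdouts keep the old one,
# which is roughly how the Ethereum / Ethereum Classic split played out.
forked_ledger = hard_fork(ledger, bad_ids={2})
print(len(ledger), len(forked_ledger))  # 3 2
```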
2/ “Viral” is what we call content that spreads quickly by means of preexisting bodies and behaviors. A virus takes our tendencies to make new cells, or touch one another, or share a laugh, and turns them to its own sole purpose: self-propagation. What makes something catch on is a subject of much speculation. What is clear is that some dangers cannot be confronted without considering the ways that the entire system they imperil works.  This issue will explore forms of security and insecurity that arise as digital technologies enter new realms of existence—and how stubbornly these two terms intertwine.
In one sense, the dynamics that it explores are not new. The rise of capitalism in the fifteenth and sixteenth centuries provided some insurance against the whims of, say, the weather. Labor markets promised to free workers from physical coercion: rather than toiling on the particular piece of land where you happened to have been born, or where some lord put a sword to your head and told you, Toil!, you could choose to go work where you wanted, for a wage.  And yet, in order to get most people to work for wages at all, those in power had to produce widespread insecurity. By turning life’s necessities into commodities that had to be bought and sold on a market, they made it impossible for most people to survive any other way. Digital technologies have only intensified this dynamic. To access information—a matter of survival in a pandemic—we rely on the platforms. If you want to get information about what the pandemic means for your kid’s school, you will likely have to join Facebook.  The current crisis has brought us back to these founding facts of capitalism as a world system. The market mediates our every move, and binds us, through countless threads of interdependence, to everyone else. Nothing like shortages of toilet paper to highlight how few of us would be capable of single-handedly reproducing the conditions of our lives.
This Victorian Jekyll-and-Hyde model of desire and sexuality informs the recent bestseller Everybody Lies by Seth Stephens-Davidowitz. Stephens-Davidowitz, who holds a PhD in economics and worked briefly at Google as a data scientist, claims that for his sections on pornography he was given access to “the Pornhub data,” though neither the endnotes nor the accompanying website makes his methods and data sets clear. His argument is simple: people’s public personas are deceptions that conceal a secret, more “truthful” self obsessed with taboos, kinks, and unconventional sexual desires.
The many pieces of clickbait that claim to illuminate these desires by drawing on Pornhub Insights are often ridiculous. “While interest may be hot and heavy,” Mashable wrote in 2017 about the supposed rise in searches for fidget spinner porn, “what you’ll actually find on Pornhub is pretty hilarious. It’s just a lot of video of fidget spinners…spinning. […] For instance, a video called ‘1000MPH Fidget Spinner Bisexual Threesome’ literally features three spinners ramming into each other with excellent narration.” More often than not, Pornhub Insights press releases get spun into stories about broad trends in human sexuality. In August 2017, for example, Insights published a post called “Boobs: Sizing Up the Searches” looking at data about “the most popular breast related searches.” The post revealed that “Pornhub visitors between the ages of 18 to 24 are 19% less likely to search for breasts when compared to all other age groups.” A Maxim writer then turned that single line in the report into the headline “Millennials Aren’t All That Interested in Breasts, According to Pretty Depressing New Study.” Playboy’s popular Twitter account received almost 10K likes, 2.8K retweets, and 2.4K comments for a 100% on-brand tweet that read “Millennials aren’t as interested in breasts as older generations. Why?” In fact, if you search “millennial” and “breasts,” Google will return countless hits about this supposedly data-based “fact,” all clustered around August 2017 and all sourced from a couple of quotes from that one Pornhub Insights report.
RECs are also moving beyond electricity, making use of their existing network of electric poles to build out fiberoptic networks and connect their rural members to high-speed internet. Perhaps even more ambitiously, some co-ops are stepping into the world of ecology. Roanoke EC, in eastern North Carolina, is running a program to distribute knowledge about sustainable forestry, targeted specifically toward Black landowners and embedded in a framework of racial justice. This initiative is rooted in a longer tradition stretching back to the 1970s, when Black co-op members fought for the co-op’s administration and its policies to reflect the demographics and interests of the service territory’s majority-Black population. So far, Roanoke EC has helped implement land conservation techniques on over 13,000 acres of land.
An electric co-op entering the world of land management is not quite as random as it may seem. Power transmission and distribution systems require extensive land management in order to ensure that trees and foliage do not interfere with electrical lines and poles, and vice versa. And, as the wildfires in California have shown, land mismanagement by utilities can have fatal consequences, which will only grow in size and scope as the climate crisis escalates. Countering the crisis will require a massive expansion of conservation efforts, and these efforts are unlikely to come from entities that prioritize profits above all else. Rather, the best stewards of the earth are those accountable to a mass constituency, rooted in local experiences and knowledge, and committed to the creation of public, not private, wealth.
But panics over new firearms technologies are older than plastic guns or action movies. So are grandiose techno-futurist claims. The American-born Hiram Maxim, inventor of the first real machine gun, confidently predicted that his creation would actually “make war impossible,” rather than producing more lethal conflicts. Never mind that, in more unguarded moments, Maxim would admit that his inspiration to enter the arms industry had come from a businessman friend who had told him, “Hang your chemistry and electricity! If you want to make a pile of money, invent something that will enable these Europeans to cut each other’s throats with greater facility.” In full philosopher-salesman-prophet mode, Maxim insisted that his fearsome weapon would, through a kind of logic of mutually assured destruction avant la lettre, leave nations too terrified of mass casualties to ever actually go to war. Needless to say, a brutal century-and-a-half later, Maxim’s sales pitch seems either laughably naive or contemptibly cynical.
In evaluating the prophecies of gun futurists, then, the novelty (or lack thereof) of their inventions seems less important than the question of what problems, exactly, they claim to solve. And, by the same token, our collective fascination with gun futurism—our reactions, variously hopeful or hysterical, utopian or bleak—is more interesting when seen in light of what we don’t find interesting.
But if cellular agriculture is going to improve on the system it is displacing, then the critics are right: it needs to grow in a way that doesn’t externalize the real costs of production onto workers, consumers, and the environment. There are serious questions about whether production can scale up safely and affordably, and some cellular agriculture practices need to be cast aside. For instance, many companies’ current production techniques, including the ones Eat Just used for its nuggets, use fetal bovine serum as a cell growth medium, which is harvested from the blood of cow fetuses during slaughter. But, now that we have several proofs of concept for cell meat, scale may be as much a social and political question as a purely technical one.
While some cellular agriculture research is being carried out at public universities with support from NGOs, most research and development is being done privately. Substantial capital is needed for research, development, and commercialization. But that the private sector sees potential in a technology that governments have mostly ignored is fundamentally a political problem. What we need are public institutions that can both nurture cellular agriculture and rein it in with public investment, regulation, and licensing. It is perfectly plausible that private firms flush with venture capital will find ways to scale and sharply reduce the costs of cultured meat. But they will almost inevitably do so by structuring their research programs and supply chains to maximize investor value, rather than social welfare.
Machine-breaking is often a good idea; for more ideas, we can turn to other movements. Tech workers are taking collective action against contracts with the Pentagon and ICE, and demanding an end to gendered discrimination and harassment. Gig workers for platforms like Uber are organizing for better wages, benefits, and working conditions. Within these movements we can find more useful materials to think with, materials that might disclose the contours of a society organized along different lines.  The intellectual is not the only one who thinks. Masses of people in motion also think. And it is the thinking of these two together, in the creativity that results from their continuous interaction, that furnishes the form and content of anything worth calling socialism. This process is messy and circuitous, with many blind alleys and false starts. It involves more time spent moving contradictions around, and creating new ones, than resolving them. But it is the only path to a future where capital’s motion finally grinds to a halt, and a different set of considerations — human need, a habitable planet — comes to coordinate our common life. This is how the Left will answer the question of what is to be done, about tech and about everything else: by thinking en masse and thinking in motion, while traversing difficult terrain.
“It’s kinda like the Parthenon now, it’s a testament to something…” —Nick Bongiorno, former IBM temp worker and activist  The IBM country club on a hill overlooking Endicott, New York has been empty for thirteen years. Now beyond repair, it was once abuzz with the activity of some 14,000 IBMers and their families. There were basketball games, swimming pools, a bar, a stage, banquet halls, guest rooms, and a golf course, all open to the thousands of IBM employees in Endicott. This was the town where, in 1911, International Business Machines was born.  IBM’s Plant Number One manufactured punch-card tabulators in downtown Endicott. Then came typewriters, printers, and the System/360 computer, after which a parade of ever-newer models were made. For decades, IBM dominated the computer industry. It was not until 1996 that their market value was surpassed by Microsoft. They have since fallen far down the ladder: these days, IBM is only the ninth-largest tech company in the world. They have gotten out of the messy business of making things and are now, primarily, a software and services company. In 2002, after more than ninety years, IBM ceased manufacturing in Endicott.
JF: Yeah, with physical magazine distribution there’s usually a 50-50 split with the distributor, and then it’s expected that they’ll sell about 50% of those, meaning you usually expect to make 25% of cover price at best as the producer. It’s standard for it to be a quasi-consignment setup with the distributor, where they don’t pay for stock upfront. They are charged only for what is sold; whatever is not sold is just destroyed, not sent back to you. You’re paid only when the magazine comes off the shelf. So for us, publishing three times a year, it’s nearly a year after we finish an issue that we get paid.
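To see how those percentages compound, here is a quick back-of-the-envelope sketch. The $20 cover price is a hypothetical placeholder, not the magazine's actual price; only the 50-50 split and the roughly 50% sell-through come from JF's description.

```python
# Rough distribution math for one shipped copy (hypothetical cover price).
cover_price = 20.00
distributor_split = 0.50   # distributor keeps roughly half of each sale
sell_through = 0.50        # roughly half of shipped copies actually sell

revenue_per_copy_shipped = cover_price * distributor_split * sell_through
share_of_cover = revenue_per_copy_shipped / cover_price
print(f"${revenue_per_copy_shipped:.2f} per shipped copy, i.e. {share_of_cover:.0%} of cover price")
```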
CH: Things are even more complicated for magazines, since the previous issue basically becomes deadstock once the next issue arrives. They don’t have the shelf life of books in the eyes of distributors. Anne Trubek, who runs Belt Publishing, writes a lot about the economics of small publishers, including working with distributors and the challenges of unknowable delayed income and remainder practices. It was totally eye-opening for me. JF: I think we always just assumed physical distribution would mostly be a way of advertising the magazine. We didn’t even really try it for our first issue–our initial distribution was just me carrying a box of books over to Peter Maravelis at City Lights. CH: We love bookstores and we love the community and depth of expertise that they cultivate. City Lights has always been a huge supporter. We’ve done amazing events at Green Apple in the Park. But for us, it was clear that selling in bookstores alone wasn’t a way to financially sustain the magazine.
Today, a growing number of people are thinking about how to regulate Big Tech. Think tanks are studying the complex interactions between social and technical systems, and proposing ways to make them fairer and more accountable. Journalists are investigating how platforms and algorithms actually function, and what’s at stake for people’s lives.
But while all of this has been crucial in refocusing the public conversation, it needs to go further. We won’t fix Big Tech with better public policy alone. We also need better language. We need new metaphors, a new discourse, a new set of symbols that illuminate how these companies work, how they are rewiring our world, and how we as a democracy can respond.
To top it off, the interface is addicting: cartoon-like icons on a red background; daily lotteries and games through which shoppers can win further discounts; and the ability to share your finds and recruit friends to buy together via WeChat, China’s biggest social media app. These attractions helped make PDD the youngest-ever Chinese startup to be listed on the NASDAQ stock exchange. At its IPO in July 2018, PDD raised $1.6 billion. Yet for all its success, PDD has encountered constant trouble from the Chinese government. Why? The prevalence of counterfeit goods on the platform. Shortly after PDD’s IPO, the State Administration for Market Regulation called for an investigation of sales of counterfeit products on the app. Social media users shared images of “Shrap” TVs and “Phelips” razors listed for sale, while an anonymous blogger wrote a widely read post accusing PDD of “set[ting] China back in its hard and bitter trade war…one of the core objectives of which is to showcase how far we have come in intellectual property awareness.” Earlier, in March 2018, a rumor had circulated on Chinese social media that in 2017 alone, WeChat blocked links sent from PDD more than 1,000 times, presumably in response to government pressure. The PDD controversy isn’t just about fake goods, however. There is a deeper dynamic at play. By using platforms like PDD, rural residents are staking a claim to China’s digital sphere. The Chinese internet has long catered to the urban middle classes. To Wu Changchang, a professor of communications at East China Normal University, the “invisible minority” of rural users “didn’t have any discursive power before.” Now, rural users are making themselves visible to the rest of the country in a radically new way—and provoking a confrontation with the state in the process.
Paper Towels and Tasty Fruits The countryside has immense importance to the Chinese Communist Party. China’s urban population surpassed its rural population for the first time only in 2011, and over forty percent of Chinese citizens still live outside cities. Economic development—and in particular, bringing an estimated 1.3 million people in rural areas out of poverty each year—has long been the basis of the party’s legitimacy. With rural incomes rising at an estimated 10 percent each year, compared with 5 to 6 percent in the cities, the countryside is also expected to boost China’s growth and consumption figures in the next decade.
But one of the reasons that the Tea Party came to power was that they organized—they built institutions. So the challenge for those of us who want a different world is not to simply trust that the expressive variety that the internet permits is the key to freedom. Rather, we need to seek a kind of freedom that involves people not like us, that builds institutions that support people not like us—not just ones that help gratify our desires to find new partners or build better micro-worlds.
The New Communalists believed that the micro-world was where politics happened. If we could just build a better micro-world, we could live by example to create a better world for the whole. I think that’s wrong. Our challenge is to build a world that takes responsibility for people not like ourselves. And it’s a challenge we won’t meet by enhancing our expressive abilities, or improving the technologies of expressive connection.
My assemblywoman responded to my letter, and shared it with the commissioner of the city’s Department of Investigation. An inquiry was initiated; I heard through the grapevine that the agency had to go through a full audit. As for the allegation of child abuse, I eventually beat the case and received a letter from the New York Statewide Central Register of Child Abuse and Maltreatment saying that the allegation was unsubstantiated. There’s no innocence in family policing—only “We could prove you’re abusive,” or “We couldn’t prove you’re abusive.” Meanwhile, even sealed unsubstantiated allegations remain on file, to be re-opened whenever there’s a new investigation.
I was financially poor but professionally fortunate to have connections who put me in touch with various family defense clinics in the city to ask for advice about my case. Not a single one was surprised about the false allegations. What they were uniformly shocked about was that the kids hadn’t been snatched up. While what happened to us might seem shocking to middle-class readers, for family policing it is the weather. (Black theorist Christina Sharpe describes antiblackness as climate.) The only aberration of my particular circumstances relative to the everyday operations of the family policing apparatus was that we seemed to elude destruction. Over the next two years, I was able to finalize the adoptions for all of the kids. But I never forgot that the only reason I didn’t lose my family was because I had the resources to make the lives of the people working at the foster care agency a living hell. How many teenage mothers—who were every bit as innocent as me—had they deployed this tactic against and succeeded? The Digital Poorhouse Every aspect of interacting with the various institutions that monitored and managed my kids—ACS, the foster care agency, Medicaid clinics—produced new data streams. Diagnoses, whether an appointment was rescheduled, notes on the kids’ appearance and behavior, and my perceived compliance with the clinician’s directives were gathered and circulated through a series of state and municipal data warehouses. And this data was being used as input by machine learning models automating service allocation or claiming to predict the likelihood of child abuse. But how interactions with government services are narrated into data categories is inherently subjective as well as contingent on which groups of people are driven to access social services through government networks of bureaucratic control and surveillance. Documentation and data collection was not something that existed outside of analog, obscene forms of violence, like having your kids torn away. Rather, it’s deeply tied to real-life harm.
Former McDonald’s CEO Ed Rensi got plenty of press attention a few years later with similar comments. “It’s not just going to be in the fast food business,” Rensi said. “If you can’t get people a reasonable wage, you’re going to get machines to do the work… And the more you push this it’ll just happen faster.” Employers, he continued, should actually be allowed to pay certain groups—high school kids, entry-level workers—even less than the meager amount they currently get thanks to the floor set by federal minimum wage law.
Soon after making these remarks, Rensi provided gloating commentary for Forbes.com that his warnings about automation had already proven true. “Thanks To ‘Fight For $15’ Minimum Wage, McDonald’s Unveils Job-Replacing Self-Service Kiosks Nationwide,” boasted the headline. Rensi could barely contain his glee—though he did gamely try to shed a few crocodile tears for the burger behemoth’s now-redundant corps of line workers. “Earlier this month, McDonald’s announced the nationwide roll-out of touchscreen self-service kiosks,” Rensi wrote. “In a video the company released to showcase the new customer experience, it’s striking to see employees who once would have managed a cash register now reduced to monitoring a customer’s choices at an iPad-style kiosk.” In reality, what is actually striking when you watch that video is not the cybernetic futurism but rather just how un-automated the scene is. Work has not disappeared from the restaurant floor, but the person doing the work has changed. Instead of an employee inputting orders dictated by the customer, customers now do it themselves for free, while young, friendly-looking employees hover nearby and deliver meals to tables.
Beyond answering phones I assume you’re also monitoring security cameras. How much of the job is that? People always assume someone is watching a camera at all times. I can 100 percent say no one is watching a camera at any time. We might have some up for peripherally keeping an eye on trouble spots. For the most part, though, the cameras are recording at all times, but no one is ever watching them.  Once you get past a certain number of cameras, you are past the point where it makes any sense for someone to be watching all of them; it’s 100 percent looking back retroactively and seeing what exactly happened.  On average, at my current manufacturing company, I do maybe three to four of those investigations a day. And the previous tech companies I worked at, it was maybe three to four a week depending on the scenario. It was kind of feast or famine with tech. With manufacturing, they care a lot more, because they’re keeping an eye on when people screw up, when things break, or when damage is done — because every penny counts.
So in manufacturing, beyond security and managing the infrastructure or assets of the tech company, it sounds like there is also a level of surveillance over the work that people are doing?  In a way. They have cameras on the production itself. It’s more a matter of making sure that if there is damage done, the person who does it is held accountable, and it’s dealt with in a timely fashion. Because they need to investigate — was it just an accident? Was it malicious? That sort of thing. If it is malicious, they deal with it rapidly; if it’s just an accident, the safety team is there to determine how it happened and how to prevent it in the future.
We have to be in community. We have to be in conversation. And we also have to recognize what piece of the puzzle is ours to work on. While it is true, yes, we’re just individual people, together we’re a lot of people and we can shift the zeitgeist and make the immorality of what the tech sector is doing—through all its supply chains around the world—more legible. It’s our responsibility to do that as best we can.  MW: Yeah, I agree. I’m a white lady raised in LA. I had to educate myself on so much that I didn’t understand, and that process is humbling and ongoing.  My voice doesn’t need to be the center of every conversation. But, okay, if I have a little power and a little standing maybe I can move capital, maybe I can ask people what they need and see what I can do to get it to them, to support and nurture their expertise and organizing and approaches, which may be completely unfamiliar to me, and may not need any advice or insight from me. I’m thinking of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). Briefly, it’s a computer-science-focused conference exploring fairness in algorithms. Over the years, we have seen increasing calls to examine algorithmic and other technologies in the context of racial capitalism and structural inequality, accompanied by warnings about the insufficiency of narrow FAccT-style technical approaches to the problems of algorithms and tech. So, what was the response? From many people, it wasn’t a re-evaluation of the field, but instead a move to absorb. Like, “Oh, well, how about we bolt an Audre Lorde quote to this computational social science paper.” This response continues to place computer science at the center, with racial justice as seasoning. Even though there are, of course, Black feminist conferences that could use some funding, and that have been deep in these topics for decades before FAccT. So my question is, why is the instinct always to absorb into the core instead of diffuse the resources to those already doing the work?
I mean, I fuck with that. We need allyship in the form of funneling actual, material support out of these Western institutions. In July 2019, one of us, Khalid Alexander, received a tip from a fellow San Diego community organizer. “You should be paying attention to the city’s new streetlights.” The message continued, “Apparently, they have cameras attached to them.” Alexander lived in one of the many predominantly Black and brown neighborhoods in San Diego that was under constant police surveillance, including by “gang suppression units” that watch, harass, and document residents. He feared that streetlights with cameras on them could supercharge these efforts.
Like every other piece of legacy software, when you work with it long enough it shapes the organization around it. Workflows are created to handle its limitations, and people like me start building bits and pieces on top of it so that those workflows can improve. And what you end up with is a whole bunch of legacy code held together with duct tape.
How did you feel about your job at the time? Initially, I was very grateful because I wasn’t driving a truck. Being the clever guy who can do these reports when everybody else gets stuck was exciting for a while. But then I got tired of doing the same thing over and over again. You just have to copy and paste, change the dates. It gets very boring.
Tindering Gitmo If Guantánamo is more than a physical detention camp, if it is also a network of people and ideologies that have successfully implemented the continuous extrajudicial detention of individuals, then how can researchers, reporters, and future generations trace its contours online, and formulate questions about what justice with regard to Guantánamo might look like? In 2015, as a master’s student in comparative literature, I emailed the Joint Task Force Guantánamo, requesting to see the prison’s library. I was informed that only reporters could go, so I began a foray into freelance journalism. To go would be to see, and to see would be to understand, I told myself. Among other things, I set out to learn how the arrival of T-Mobile cell service had changed life on the naval base. I hoped I could convince US civilians that Guantánamo was not so far away—what the Bush administration had described as “the legal equivalent of outer space” was, in fact, connected by multiple fiber optic cables to the state of Florida.  As I interviewed guards stationed at the base, I discovered many of them were millennials like me. They were mostly twenty-somethings, some actually younger, many of whom scuba-dived on the weekends, acting as if warehousing Muslim men was part of their patriotic duty. At the same time, I could not shake the feeling that everything I saw in the detention facilities, where I was surveilled and accompanied by a handler most of the time, was a curated performance. To understand Gitmo, I realized, I would need to find a different way backstage.  For the past five years, I have relied on different open-source intelligence methods to explore the porousness of Gitmo and to follow the people who move through it. I spent one year watching the Joint Task Force scrub its own official Twitter feed of hundreds of tweets. (They subsequently deactivated the account, and I took over the handle.) I sat quietly for years with the knowledge that geolocation-based smartphone apps were a window into a military culture that most civilians will never see, and nodded my head when people told me that fitness trackers like Strava could reveal someone’s location on a military base. I knew that Strava was just the tip of the iceberg. I didn’t need to go to Gitmo to speak with personnel there; I could just turn on my phone.
I considered different platforms—Facebook, YouTube, Reddit—where current and former guards might hover. All seemed too public—except Tinder. And so, in the summer of 2017, I plugged in a little personal information about myself on the app, geolocated to Guantánamo, and began to chat with men who were stationed there. I ended up swiping right on private contractors, members of the Military Police, sailors who were just passing through the port. Meanwhile, I sat in my small apartment in Massachusetts trying to understand what precisely I was trying to understand about the detention facilities.  What I began to see through Tinder is that Americans would pass through the base and eventually return stateside. My new digital strategies were leading me to reckon with the fact that guards themselves were constantly returning after their rotations to communities throughout the United States, many slipping back into civilian life. Guards came, guards went, rinse, repeat. Through swiping, I could ask these people what they saw on the ground, and I could do what I had largely been unable to do at Guantánamo—learn their names, gain records of their faces, outline their moral codes, inquire what the detention facilities represented to them.  The responses I gathered included disavowals and defenses of the national security state—a diversity of perspectives absent in much of the public record. At the same time, I was trying to document the identities of former guards, though at a certain point I recognized that this alone wouldn’t cause me to reckon with the vastness of the US national security state represented by Guantánamo Bay. As the scholar A. Naomi Paik argues in her book Rightlessness: Testimony and Redress in U.S. Prison Camps since World War II, Gitmo is part of a longstanding and ongoing US project to create physical and legal black sites. A few truths told on Tinder couldn’t make for a global reconciliation.  Home Truths It took the UNC Charlotte students’ campaign, which I first heard about in late 2019, for me to reckon with my own lack of imagination. I had been so intent on using social media to map the identities, ideologies, and movements of former guards that I hadn’t considered what might transpire if their identities were to be widely known. The Zoomers had first learned about Bogdan’s history through LinkedIn—and then they led a concerted effort to fire him.
The Los Angeles Police Department’s Real-Time Analysis and Critical Response Division (RACR) is housed in a hulking, institutional-gray building about a mile north of downtown. Its only marking is its address: 500. In the fall of 2013, I had an early morning meeting there with Doug, a “forward-deployed” engineer from Palantir Technologies, which builds and operates one of the premier platforms for compiling and analyzing massive and disparate data used by law enforcement and intelligence agencies. He was one of about eighty people I interviewed over the course of five years to understand how the LAPD uses Palantir and big data.
Palantir’s goal is to create a single data environment, or “full data ecosystem,” that integrates hundreds of millions of data points into a single search. Before Palantir, officers and analysts conducted mostly one-off searches in siloed systems: one to look up a rap sheet, another to search a license plate, another to pull up field interview cards, and more still to search for traffic citations, access the gang system, and so on. Seeing the data all together in Palantir is its own kind of data.  Palantir’s clients—federal agencies such as the CIA, FBI, Immigration and Customs Enforcement, and the Department of Homeland Security; local law enforcement agencies such as the LAPD and New York Police Department; and commercial customers such as JPMorgan Chase—need training in order to learn how to use the platform, and they need a point person to answer their questions and challenges. That’s where engineers like Doug come in.  In a training room at the RACR, I watched as Doug logged in to Palantir Gotham, the company’s government intelligence platform, and pulled up the homepage (Figure 1). For him, it was a banal moment, but I’d been eagerly anticipating this sight: there is virtually no public research available on Palantir, and media portrayals are frustratingly vague.
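To give a rough sense of what replacing siloed, one-off lookups with a single data environment means in practice, here is a minimal sketch. It is not Palantir's actual architecture or interface; the records and the unified_search function are hypothetical, and the point is only that one query across merged sources surfaces connections that separate lookups would not.

```python
# Hypothetical stand-ins for previously siloed law enforcement databases.
rap_sheets  = [{"name": "j. doe", "record": "2014 arrest"}]
plate_reads = [{"name": "j. doe", "plate": "7ABC123", "seen": "2013-09-12"}]
field_cards = [{"name": "j. doe", "note": "stopped near 5th and Main"}]

silos = {"rap_sheets": rap_sheets, "plate_reads": plate_reads, "field_cards": field_cards}

def unified_search(name):
    """One query returns whatever every source holds on the same person."""
    return {source: [r for r in records if r["name"] == name]
            for source, records in silos.items()}

print(unified_search("j. doe"))
```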
Imagine the world as a strange apple. The depressions at both poles are deformed and deepened until they connect, turning into a doughnut. The skin of the apple, meanwhile, remains intact and can slide up and down the “hole” of the doughnut like an endless treadmill. The observer is situated somewhere in the hole, and what he sees is the ring-shaped world endlessly unfolding.
More fantastically, as the observer moves towards any point in the wall of the doughnut, the point would automatically open up, expand and surround the observer in a new doughnut-view. A perfect, self-organizing, fractal structure. Hundreds of passengers wriggled under Mimi’s wings, getting impatient.
Around the end of World War II, the first general-purpose electronic computers were introduced. The data being entered into these machines was numerical; it was subjected to mechanically coded mathematical formulae, and the output was a solved equation. An amazing innovation, and one that—provided the inputs were entered accurately, and the machine operations properly encoded—returned an accurate and reliable result. An increased focus on feedback soon enabled a process known as “machine learning” through neural networks—a technique that has been revived in the last decade to propel a new AI boom, driven by breakthroughs in computer vision and natural language processing.
Machine learning sorts through vast amounts of chaotic data and, using adaptive algorithms, closes in on specific arrangements of information while excluding other possible interpretations. But, in order for any computation to occur, a process of rationalization must first create machine-readable datasets. Real-world phenomena must be “datafied” by sorting it into categories and assigning fixed values.  Take, for example, most image recognition software. Whether the goal is identifying handwriting or enemy combatants, a training set of data made up of digital images—themselves encoded arrangements of pixels—is typically created. This initial process of digital image capture is a form of reduction and compression; think of the difference between the sunset you experience and what that same sunset looks like when it is posted to Instagram. This mathematical translation is necessary so the machines can “see,” or, more accurately, “read” the images.  But for the purposes of this kind of machine learning, further rationalization must occur to make the data usable. The digital image is identified and labeled by human operators. Maybe it is a set of handwriting examples of the numeral two, or drone images from a battlefield. Either way, someone, often an underpaid crowdworker on a platform like Amazon Mechanical Turk, decides what is meaningful within the images—what the images represent—so that the algorithms have a target to aim for.  The set of labeled images is fed into software tasked with finding patterns within it. First, borders are identified in the encoded arrangement of pixels. Then larger shapes are identified. An operator observes the results and adjusts parameters to guide the system towards an optimal output. Is that a 2? Is that an enemy combatant or a civilian?  Once this output has been deemed acceptable, the system is fed new, unlabeled images and asked to identify them. This new data, along with feedback on the functional accuracy of its initial output—“yes, that is a 2”—is used to further fine-tune the algorithm, the optimization of which is largely automated. This basic process applies to most machine learning systems: rationalized data is fed in, and through association, feedback, and refinement, the machine “learns” to provide better results.
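For readers who want to see that workflow end to end, here is a minimal sketch using scikit-learn's small bundled set of labeled handwritten digits as the already rationalized dataset the passage describes. It is a simplified stand-in rather than any particular production system, and the choice of model and parameters is an assumption made for brevity.

```python
# A compressed version of the supervised-learning loop described above:
# labeled images in, patterns learned, held-out feedback ("yes, that is a 2").
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # 8x8 pixel arrays, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0
)

# The human "operator adjusting parameters" step is reduced here to a couple
# of fixed settings; in practice much of this tuning is automated.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)                 # find patterns in the labeled data

# New images, unseen during training, are classified; comparing predictions
# against their true labels is the feedback that guides further refinement.
predictions = model.predict(X_test)
accuracy = (predictions == y_test).mean()
print(f"Accuracy on held-out digits: {accuracy:.2%}")
```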
In contrast to the digital frontiersman of Barlow’s declaration, Indymedia activists built a platform that prioritized communities. Within Indymedia, communities built their own trusted online spaces. Autonomous groups were then connected to others in a common network, with the aim of providing mutual support and mounting resistance to institutional power.  Opening Doors with Open Publishing When Indymedia was at its height between 1999 and 2006, new IMCs were going online at a rate of one every nine days. Many were started to support anti-globalization protests, like in Seattle in 1999. The Indymedia center in Miami, for example, started in 2003 in the aftermath of the Free Trade Area of the Americas meeting, when labor organizers, farm workers, and anti-globalization demonstrators descended on the city to protest the trade negotiations.  Though many IMCs formed in response to local anti-globalization actions, like the one in Miami, starting a new Indymedia site wasn’t bound to that movement. The IMC in Philadelphia, for example, emerged in preparation for the protests surrounding the Republican National Convention in 2000. Others were built as general-purpose outlets for local activism. One of the first projects of the San Francisco Bay Area IMC — later known as “Indybay” —  was a list of the forty-five worst slumlords in the city. Indymedia journalists compiled the list after interviewing and meeting with local tenants’ rights advocates in response to rising rents during the dot-com boom.
Whether an IMC was started to cover an anti-globalization protest or to serve as a community media outpost, one thing they all shared was a website with some level of “open publishing.” This meant it had a usable interface that made it relatively easy for anyone to post on the central newswire. Most Indymedia sites had three columns (similar to Facebook today). The left column had a menu for navigating to other local IMCs. The center column was a feed of stories, and the right column was usually reserved for submitting a post or listing calendar events. “It was the first self-publishing platform I had encountered,” Lee Azzarello, who started working with the Indymedia center in New York City in 2001 and helped with global Indymedia tech support, told me, echoing other IMC members I interviewed. “This was before WordPress existed and blogs took some expertise to start and use.”  Open publishing also opened the doors to abuse. “We had constant battles with trolls the whole time,” Mark Burdett, an Indymedia veteran and former colleague from EFF, told me in an interview. One key way that Indymedia sites dealt with trolling was by having an editorial policy: members who monitored the posts submitted to the newswire used the policy to decide what got promoted to the top. But the more people used Indymedia sites, the more the trolls and spammers did too. Indymedia organizers eventually built tools that automatically detected spam or hateful content to flag for review before it was allowed to go live.  Issues with trolling, however, never eclipsed the real appeal of open publishing: it provided an easy-to-use platform which non-tech experts could use to elevate their stories online. Activists had long recognized that skewed narratives and silences from corporate media were part of what they had to fight in order to mount political resistance. Even so, those who were in a position to write and publish stories to counteract the mainstream media were relatively few and far between, relying on community radio and public access television, newsletters, or individual blogs.
In the middle of the night on March 8, 2018, Feminist Voices was shut down on Weibo because of the “posting of sensitive and illegal information.” After a few hours, the WeChat account was also banned, under the vague charge of “violating relevant laws and regulations.” On its last day, Feminist Voices had 250,000 followers across both platforms. The next day, on March 9, 2018, the WeChat index—similar to Twitter’s trending topics—showed a significant increase in the popularity of the word "feminist," apparently related to the crackdown on Feminist Voices. Then, ten days later, a huge WeChat public account, whose usual content was completely unrelated to feminism, published a long article accusing Feminist Voices of being linked to “criminal prostitution groups” and “outside reactionary forces.” This sensationalist article quickly gained tens of millions of views online. However, when we tried to publish a rebuttal, it was removed after only 4,000 clicks. Any mentions of Feminist Voices’ legal work—such as our unsuccessful attempt to sue Weibo and WeChat in order to challenge the closure of our accounts—were banned, along with any articles or photos sent by our readers or supporters. WeChat shut down some supporters’ personal accounts, and Weibo even forbade users from using our logo as an internet avatar.
After that, the popularity of search terms related to feminism on the Chinese internet plummeted. Obviously, people had gotten the message that feminism was “unpopular” and should be treated as taboo. This represented yet another front in the war that the state had been waging against feminism since 2015. However, this time, the means was no longer criminal investigation but online repression. Its purpose was to obliterate the social contribution of feminists, cut off our social networks, strip feminist actions of their legitimacy, and drive us out of the public spaces where we had been working hard for the past few years.
CH: On top of laying things out, we had to decide how to actually print the books. There’s a cost continuum in printing from print on demand, to digital printing, to offset printing. Print on demand is much more expensive on a per-unit basis, and there’s not much of a discount as you print more copies. Then there is digital printing, which costs less than print on demand and doesn’t have a large setup cost, but there is a limit to how low the cost per unit gets when printing in bulk. Offset printing has a larger up-front cost, but after that your cost per unit is really low. There’s an inflection point, which I think for us was around 1,000 copies: below that, it makes sense to print digitally because of the lower setup costs; above it, offset makes sense because of the lower price per unit.
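CH's inflection point falls out of a simple comparison of setup and per-unit costs. The figures in the sketch below are hypothetical placeholders, not the magazine's actual printing quotes; they are chosen only so that the crossover lands near the 1,000-copy mark described above.

```python
# Compare two pricing structures: low setup / high unit cost (digital)
# versus high setup / low unit cost (offset). All numbers are hypothetical.
def total_cost(setup, per_unit, copies):
    return setup + per_unit * copies

digital = {"setup": 200.0, "per_unit": 5.0}
offset  = {"setup": 2500.0, "per_unit": 2.5}

for copies in (250, 500, 1000, 2000):
    d = total_cost(digital["setup"], digital["per_unit"], copies)
    o = total_cost(offset["setup"], offset["per_unit"], copies)
    cheaper = "digital" if d < o else "offset"
    print(f"{copies:>5} copies: digital ${d:,.0f} vs offset ${o:,.0f} -> {cheaper}")
```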
JF: We decided early on to just not do print on demand. I got advice from our friend Gabe Durham, who runs Boss Fight Books. He used a lot of print on demand services early on, then eventually moved on to larger print runs, both because printing individual books is expensive and because the quality of print on demand was pretty variable. We decided to look for a printer that we could do small runs with. I ended up reaching out to folks at presses whose books I admired–Timeless Infinite Light, Song Cave, Justin Carder who ran Wolfman at the time. They were super helpful and generous with their recommendations. A few of them recommended McNaughton and Gunn, a printer in Michigan, and we ended up going with them, initially doing digital printing runs. We were pretty happy with the quality, especially the “luxury matte finish” covers–a lot of people commented on the silky hand-feel of the books.
If we’re going to require companies to pay a chunk of their data revenue into a fund, however, we first have to measure that revenue. This isn’t always easy. A company like Facebook, by virtue of its business model, is wholly dependent on data extraction—all its revenue is data revenue. But most companies don’t fall into that category.
Boeing, for instance, uses big data to help manufacture and maintain its planes. A 787 can produce more than half a terabyte of data per flight, thanks to sensors attached to various components like the engines and the landing gear. This information is then analyzed for insights into how to better preserve existing planes and build new ones. So, how much of Boeing’s total revenue is derived from data? Further, how much of a company’s data revenue can be attributed to one country? Big data is global, after all. If an interaction between an American and a Brazilian generates data for Facebook, where was that data extracted? And if Facebook then refines that data by combining it with information sourced from dozens of other countries, how much of the value that’s subsequently created should be considered taxable for our data fund? Measuring data’s value can be tricky. Fortunately, scholars are developing tools for it. And politics can help: in the past, political necessity has motivated the creation of new economic measurements. In the 1930s, the economist Simon Kuznets laid the basis for modern GDP because FDR needed to measure how badly the Great Depression had hurt the economy in order to justify the New Deal.
In recent decades, we’ve seen a greater destigmatization of mental health. People use terms like depression much more frequently than they used to. But I wonder how far destigmatization has really gone, if 76 percent of people don’t feel comfortable talking about the lived realities of depression while using their real name. As you said, the internet can clearly be a life-saving space. And much of its power comes from the ability for people to use a pseudonym on a major platform like Instagram to talk about depression. That’s why it can be counterproductive when platforms try to enforce enhanced identification measures like real name policies.
Real name policies are motivated, at least in part, by the idea that the ability to be anonymous or pseudonymous on the internet is a major contributor to online toxicity. But what your research reveals is that the same anonymity or pseudonymity can be a life-saver, since it enables kids to discuss mental health issues they wouldn’t feel comfortable discussing otherwise. And there may not be an obvious real-world space where they could have those discussions.
“I find that infuriating,” Christopher Mitchell, director of the Community Broadband Networks Initiative at the Institute for Local Self-Reliance, told the magazine Next City. “Chattanooga has not only one of the best networks in the nation, but arguably one of the best on Earth and the state legislature is prohibiting them from serving people just outside of their city border.” Even more recently, the Tennessee state legislature passed the Broadband Accessibility Act, effectively a $45 million tax break for private telecoms like Comcast.
Arguments against Chattanooga’s municipal network and others like it usually take up a familiar anti-government line of attack. Critics often point to the high cost of building new broadband infrastructure, which Chattanooga partly covered with a hefty federal stimulus grant. They further argue that this level of state intervention hurts competition in the internet market, and could eventually result in poorly run public monopolies.
How did you first come across Beer? It was quite serendipitous. Stafford Beer had published his book Decision and Control in 1966, and Fernando found a copy in a bookshop while visiting New York City. He brought it back to Chile, and a group of us read and discussed the book. At that time, I was a student of engineering. The cybernetician W. Ross Ashby’s “law of requisite variety” is central to Beer’s method for controlling complex systems. According to Ashby, only variety can control variety. To control a complex system like an industrial economy in order to get it to perform a certain way, you must ensure that the relevant people, such as workers and managers, can respond to all of the possible scenarios that might prevent the system from performing that way.
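Ashby’s point can be stated almost mechanically: a regulator can keep a system on target only if, for every disturbance it may face, it has at least one response that absorbs it. A minimal sketch of that counting argument, with an invented set of factory disturbances and responses (this is not Beer’s or Ashby’s notation, just an illustration):

```python
# Toy illustration of Ashby's law of requisite variety:
# the regulator needs at least one response for every disturbance it must absorb.
# The disturbances and responses below are invented for illustration.

disturbances = {"machine breakdown", "supply shortage", "demand spike", "worker absence"}

# Which disturbances each available response can absorb.
responses = {
    "reroute production": {"machine breakdown"},
    "draw on buffer stock": {"supply shortage", "demand spike"},
    "reassign shifts": {"worker absence"},
}

covered = set().union(*responses.values())
uncovered = disturbances - covered

if uncovered:
    print(f"Insufficient variety: no response for {sorted(uncovered)}")
else:
    print("Requisite variety: every disturbance has at least one absorbing response.")
```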
With great elegance, Beer developed these ideas about how to manage complexity. They appealed to us, because we were not intending to have a centrally planned economy. We were intending to develop organizations that had the capacity to make decisions within themselves to support the development of the economy.
Are there other features you find especially problematic? Well, the Graph API was a super idealistic and profoundly dumb idea. What’s that? Facebook launched the Graph API in 2010 as an app platform for third-party developers. In exchange for developing apps for Facebook, developers received an extraordinary amount of data about Facebook users. From what I’ve heard, developers couldn’t turn off the data hose. You just got this stuff.
This was how the Obama campaign in 2012 ended up with the whole social graph of the United States—and probably beyond—after building a Facebook app. They probably had no idea they were going to get all that data, and they quickly had to figure out how to use it. It’s also how Cambridge Analytica obtained the data of as many as eighty-seven million users.
The story, of course, is more complicated. Internet porn has become a big business, but amateur communities are thriving. Just as millions of little-known SoundCloud musicians didn’t prevent the rise of a megastar like Taylor Swift, the ready availability of free amateur erotic content on spawn-of-newsgroups megasites like Reddit coexists with the unstoppable rise of Pornhub. The spirit of the older internet endures there, albeit in a different form.
As for the performers, many are torn between endorsing the “success” narrative being pushed by Big Porn outfits like Pornhub and voicing very real grievances—some of them brought about, in their opinion, by the rise of the tube sites. In September 2018, Pornhub produced its first award show at a historic theater in Downtown Los Angeles, enlisting a large number of actresses and actors with the promise that the winners would be “decided by the viewing patterns of the site’s millions of visitors and loyal members.” Shortly before that, Kanye West had mentioned the site on Jimmy Kimmel’s late-night talk show. West referred to “Pornhub” as a generic term for online porn, much like older people in the 1990s would call the internet “AOL.” Pornhub promptly scrapped its original plans for the event and made West “artistic director” of the evening.
No other object is addressed so explicitly or at such length in our Constitution, no matter what you might think of either the Second Amendment itself or the convoluted history of its competing interpretations. And no other object is quite comparable as an icon of contested cultural identities and as a flashpoint for vicious partisan disputes.
Cognitive psychologists have documented how guns appear to activate our “affect heuristics”: when a senator holds a rifle up for a photo-op on the floor of Congress, or a researcher displays a picture of a gun to a subject in a lab, most people will have some kind of immediate and intense reaction, whether positive or negative—a knee-jerk response that belies our ability to dispassionately assess arguments or statistics. Meanwhile, when it comes to the ballot box, attitudes towards gun ownership have arguably become the single biggest predictor for party affiliation and voting preference.
The Robotic Reserve Army

As socialist feminism usefully highlights, capitalism is dedicated to ensuring that as much vital labor as possible goes uncompensated. Fauxtomation must be seen as part of that tendency. It manifests every time we check out and bag our own groceries or order a meal through an online delivery service. These sorts of examples abound to the point of being banal. Indeed, they crowd our vision in virtually every New Economy transaction once we clue into their existence.
One recent afternoon I stood waiting at a restaurant for a to-go meal that I had ordered the old-fashioned way—by talking to a woman behind the counter and giving her paper money. As I waited for my lunch to be prepared, the man in front of me appeared astonished to receive his food. “How did the app know my order would be ready twenty minutes early?” he marveled, clutching his phone. “Because that was actually me,” the server said. “I sent you a message when it was done.” Here was a small parable of labor and its erasure in the digital age. The app, in its eagerness to appear streamlined and just-in-time, had simply excised the relevant human party in this exchange. Hence the satisfied customer could fantasize that his food had materialized thanks to the digital interface, as though some all-seeing robot was supervising the human workers as they put together his organic rice bowl.
A Continuum of Strategies For many people in the Global South, work has long been isolating and uncertain by design. As “low-tech” workers such as platform drivers build community and collective power, they are able to draw on different local histories of resistance, and different methods for negotiating the social and political tensions in their cities.
Often, in the analysis of gig worker power by academics and observers in the Global North, an absence of unionization is thought to indicate an absence of worker power. In the Global South, though, unions are not seen as the only or best way to collectivize in these labor regimes. This is not to argue that workers in the Global South do not unionize or that unions are unhelpful. Rather, unions exist on a continuum of strategies to reshape work conditions, build collective worker identity, and engage in mutual aid. (The political economists Arianna Tassinari, Matteo Rizzo, and Maurizio Atzeni, among other scholars, have examined in depth the role of unions in precarious work conditions.) Recognizing why alternate modes of organizing exist, and who uses them tactically, may be key to keeping tech labor movements around the world inclusive and responsive to the needs, vulnerabilities, and politics of those who do not have the privilege of participating in visible direct action. To achieve solidarity, and to succeed, the movement must engage with the informality, power asymmetries, and inequities of class and caste that shape the conditions of work around the world.
When I went to Burning Man, that’s what struck me: I am in the desert. The desert of Israel, from the Bible, under the eye of heaven, and everything I do shall be meaningful. That’s a Protestant idea, a Puritan idea, a tech idea, and a commune idea. All of those come together at Burning Man and that’s one of the reasons I’m fascinated by the place.
Burning Man has many problems, of course, and I am distressed by many pieces of it. However, there was a moment I had during my first visit when I went two miles out in the desert and I looked back at the city and there was a sign that looked just like a gas station sign and it was turning, the way gas station signs do. It could’ve been a Gulf or Citgo sign, but it wasn’t. It was a giant pink heart. And for just a moment, I got to imagine that my suburbs back in Silicon Valley were ruled over not by Gulf and Citgo, but by love.
Under such an arrangement, countries are free to continue burning fossil fuels, so long as they offset their emissions. For this reason, many climate advocates have been critical of net zero as a goal. One common critique is that net-zero pledges won’t stop the continued extraction and combustion of fossil fuels. Further, they’re aimed at a future that’s far enough away that present-day leaders won’t be accountable for what happens. “Net zero by 2050. Blah, blah, blah,” Greta Thunberg told a summit of young organizers just prior to COP26.

It’s true that net zero is woefully insufficient. We also need to be talking about immediately phasing out fossil fuel production. But net zero is still a worthwhile transitional goal, because we don’t yet have all the technologies we need at scale for true climate repair. Climate change is a problem of stocks, not flows: we need not only to stop emitting carbon by switching to renewables, but to reduce the existing levels of carbon in our atmosphere. Net zero alone won’t get us there. Still, it gives us a way to buy time while we develop and deploy the technologies required for full decarbonization.
Yet net zero is harder than it looks. The difficulty isn’t just political—compelling countries and companies to make promises and abide by them—but epistemological. At the center of net zero is a knowledge problem: How do we know when we’ve gotten to net zero? Answering this question is surprisingly hard.

There are two main challenges. First, there is immense technical complexity involved in accurately measuring both positive and negative emissions. Take positive emissions: how many are embodied in the manufacture of a car? You could measure how much carbon is produced by a single car factory, but a car has around 30,000 parts. Those parts might be sourced from suppliers around the world, each with their own carbon footprint. Further, the parts use different raw materials, and the extraction and transport of those materials carry emissions of their own. And that’s only one factory; there are some 300,000 car manufacturing facilities in the US alone.
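To see why the accounting gets hard so quickly, here is a small sketch of how embodied emissions might be rolled up over a bill of materials: each part carries its own manufacturing and transport emissions plus whatever is embodied in its sub-parts. The parts, emission figures, and structure are invented; a real inventory would need supplier-specific data for tens of thousands of components, which is exactly the difficulty described above.

```python
# Toy roll-up of embodied emissions (kg CO2e) over a bill of materials.
# Parts, emission figures, and the tree structure are invented assumptions.

from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    manufacturing_kg: float               # direct emissions from making this part
    transport_kg: float = 0.0             # emissions from shipping it to assembly
    subparts: list = field(default_factory=list)

def embodied_emissions(part):
    """Direct emissions plus emissions embodied in every sub-part, recursively."""
    return (
        part.manufacturing_kg
        + part.transport_kg
        + sum(embodied_emissions(sub) for sub in part.subparts)
    )

# A drastically simplified "car" with three of its roughly 30,000 parts.
car = Part("car", manufacturing_kg=1_200, subparts=[
    Part("engine", manufacturing_kg=900, transport_kg=60,
         subparts=[Part("engine block", manufacturing_kg=400, transport_kg=25)]),
    Part("battery", manufacturing_kg=700, transport_kg=80),
    Part("chassis", manufacturing_kg=1_100, transport_kg=40),
])

print(f"Embodied emissions for this toy car: {embodied_emissions(car):,.0f} kg CO2e")
```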
An Industry is Born

Today, the computer security industry and its associated institutions are firmly established. Along with dozens of high-profile companies dedicated to diagnosing, fixing, and preventing breaches, many corporations hire specialized professionals to deal with a host of technical security issues. All of this activity adds up to a lot of money: analysts estimate the security industry to be worth about $170 billion.
This industry hasn’t always existed, of course. It had to be built. And among the most important contributors were underground hackers who enjoyed breaking into systems and sharing what they found with their peers. In the early days of computing, security largely revolved around control of physical access to shared machines. Even the idea of passwords was, for a time, controversial. While engineers developed computationally secure systems like Multics in the 1960s and 1970s, they were eclipsed by the popularity of Unix. The advent of computer networking revealed the shortsightedness of some of the design decisions baked into Unix, as the choices made in a pre-networked era left critical vulnerabilities for later hackers to exploit.

Hackers gathered on dial-in electronic Bulletin Board Systems (BBSes) in the 1980s, swapping tips on how best to gain access to the different types of computers connecting to the nascent internet. Some joined underground secret societies, competing with each other for control of more computers. Others played cat-and-mouse games with systems administrators, who often lurked in the same BBSes, learning how to defend their users.

But in the mid-1990s, after a wave of high-profile arrests and the growing demonization of hackers in the media, some underground hackers began to shift their focus. Rather than hoarding their knowledge within select circles, they began to publicize it more widely. Upon discovering vulnerabilities, they would notify the company that made the software. Yet these vendors frequently reacted by dismissing the flaws as “theoretical,” while telling their users that the only threat to their security came from the hackers themselves.

In response, certain hackers set out to “make the theoretical practical,” as the tagline of the legendary hacker group the L0pht put it. Throughout the 1990s, they worked hard to force tech companies to take security seriously. Some put pressure on vendors by sharing the vulnerabilities on “full disclosure” mailing lists like Bugtraq and in hacker zines like Phrack. This move was risky—and controversial—since it potentially put exploits in the hands of malicious actors. But supporters of this approach hoped the risk would spur a rapid response from the vendor, while also empowering individual systems administrators to ensure their own organizations were adequately prepared.
In the attention economy, successful platforms find a way to turn more and more of our fun into an opportunity to extract profit. Ironically, the people who make computer games have themselves become highly exploited. Platforms that claim to be public squares or even playing fields can in fact encourage actors to game them in bad faith. Trolls deliver the serious bite, then back off, saying it was “just a joke.” Transgression repeated, in feedback loops, can transform into boredom. As data-driven porn converges on predictable categories, we search for our safety word: What do you say when your kink isn’t naughty anymore?

4.
If the end game of gamification is playing yourself, what would it take to reclaim play? Writers in this issue explore how homo ludens has put, and might still put, our instinct for play to better ends. How to build a better “bite”? As we gain new computational powers, we can use them to build and explore virtual alternatives to vanishing public space. We can tinker and jigger, attempting to build networks on entirely new principles.
The apex—or climax—of the device’s popularity and public visibility came in January 2014, when the machine was featured in the HBO late-night documentary Sex/Now. For only $200—which included free credits for streaming encoded videos over the RealTouch servers, lubrication cartridges, and cleaning fluid—men could buy the device, described on its Amazon product page as a “High-tech Interactive Virtual Sex Simulator.” However, the same month it debuted on HBO, AEBN—mired in an increasingly expensive patent dispute over the product—halted production of both the RealTouch itself and the vital replacement parts required to keep the device functioning properly. Having sold off the remaining units, the company posted a eulogy for the device, beaming with pride at the RealTouch for “cementing its place in the history of adult entertainment.”

It might be tempting to dismiss the RealTouch as another failed attempt at realizing the far-fetched and perpetually deferred fantasy of teledildonics, or to giggle at the absurdity of both the device and its users, or to condemn it as a participant in an industry that objectifies women. But indulging such impulses prevents us from understanding what the RealTouch was—and why it matters. As Marshall McLuhan eloquently put it, “resenting a new technology will not halt its progress.” The RealTouch wasn’t just a complex technical system—it was also an economic strategy and a set of social relations among networked subjects performing a new form of digitized sex work. Each of these factors—the machine’s mechanics, its economics, and its affiliated labor practices—were oriented toward the goal of creating the cybersexual real: the promise to replicate the feel of sex by stimulating the senses of seeing, hearing, and—most crucially—touch. AEBN’s engineers worked hard to build a machine that functioned so effectively that it would make the wearer feel as if they were penetrating “the mouth, vagina, or anus of a real human,” and the company frequently boasted of creating “the most lifelike simulation ever.”
Over the centuries, technological innovation has made it possible to automate the production of an ever-growing number of goods, from clothes to chemicals to cars. With the RealTouch, AEBN aspired to automate the production of male orgasms.

You Can’t Pirate a Rollercoaster

When AEBN first hatched the project, they had over 100,000 pornographic videos in their stable, spanning an impressively wide range of genres. Like others in the adult video industry, however, AEBN faced the threat of declining revenues due to unauthorized copying and downloading. The company had compounded the problem in 2006 when it launched the streaming site PornoTube, intended to be “the YouTube of porn.” PornoTube was supposed to encourage users to pay for online porn. Instead, it sparked a number of imitator sites, where people reposted paid content from AEBN. CEO Scott Coffman would later call the decision to start PornoTube “the worst thing I’ve done since I was in the adult business.” So when inventor Ramon Alarcon approached AEBN with a wooden prototype of what would eventually become the RealTouch, the company saw it as a potential salve for a wound that had been, to some extent, self-inflicted. Alarcon held a Master’s in Mechanical Engineering from Stanford. He had spent eight months interning at NASA all the way back in 1993 and five years working for the Immersion Corporation, a company based in San Jose that had cemented its status as a leader in “haptics”—the use of computer-controlled mechanical cues to provide the illusion of touch.
Looking at these tech elites who represent the new era of a rising China, I felt as if I were seeing something much larger played out in miniature. And I was forcefully reminded of several recent events that have sparked heated debates in China.

Wolf Instincts

On August 7, 2018, the founder and CEO of the Chinese search giant Baidu, Li Yanhong (also known as Robin Li), commented in a WeChat post about Google’s possible return to the Chinese market: “If Google decides to come back to China, we are highly confident that we will take them on again, and win again.” This comment triggered a vehement backlash online, with tens of thousands of people expressing discontent about the quality of Baidu’s search results, especially the deceptive ads that it promotes. Two years earlier, a college student named Wei Zexi had died as a result of delayed treatment caused by the so-called “Putian Medical Group,” which posted misleading ads for an ineffective form of cancer immunotherapy that were then promoted in Baidu’s search results. In the two months after this incident, the stock value of Baidu plummeted by over 15 percent, but even today fake medical ads still appear in Baidu Search, waiting to swindle users once more.

On August 25, 2018, a twenty-five-year-old woman from Zhejiang was raped and killed by her driver after booking a ride on Didi Hitch (an app similar to Uber Pool or Lyft Line). Public opinion was especially incensed because this was the second instance of rape and murder on the Didi platform within the span of one hundred days. Didi is the biggest online car-hailing service provider in China, yet its product design, driver screening, and customer service all had serious safety flaws that had gone unresolved. Furthermore, a former executive was discovered to have said that Didi Hitch was designed to be a “sexy” social platform—“like a coffee shop, or a bar, a private car can become a half-open, half-private social space. It’s a very sexy application scenario”—which further fanned the outrage. Didi eventually decided to suspend and reorganize Didi Hitch in an effort to address the problem, but it could not stop users from uninstalling and boycotting the app anyway.
The third piece of explosive news happened during the Burning Man Festival. Liu Qiangdong, also known as Richard Liu, the founder of the online retailing giant JD.com, which has a stock market value of 310 billion RMB, was embroiled in a sexual assault scandal following a night of lavish eating and drinking at a Japanese restaurant in Minnesota. As a result, from August 31 to September 7, 2018, JD.com’s share price plummeted from $31.30 to $26.95 and the company’s market value evaporated by 43 billion RMB. Although the scandal was unrelated to the services of the company, it nonetheless gave rise to carnivalesque visions of the lifestyles of Chinese tech entrepreneurs as well as a significant critique of this nouveau riche class.

In the past twenty years, the Chinese tech industry has experienced explosive growth. Terms like langxing (“wolf instinct”) and yeman shengzhang (“savage growth”), along with jiangwei gongji (a term from the famous sci-fi novel The Three-Body Problem meaning a blow so powerful that it flattens your opponent from three dimensions to two), have become popular among Chinese tech entrepreneurs. They act as the first generation of pioneers journeying into the virtual New World. They imagine themselves as packs of wolves on the Mongolian plains who can only survive and emerge victorious through bloody combat, incessantly stalking new territory and prey.

Objectively speaking, China’s technology companies have indeed greatly promoted technological progress in China and even around the world. According to an unpublished report by the China Development Research Center, from 1995 to 2015 nearly 80 percent of Chinese R&D expenditure came from private tech companies rather than the government, a percentage significantly higher than in developed countries such as the US, the UK, and France, where it hovers around 50 to 60 percent. In most Chinese cities, cash is seldom used, since in everyday life most consumers pay through mobile payment apps on their smartphones. Even street peddlers selling roasted sweet potatoes hang a card with a QR code to scan for payment. Concepts like AI, virtual reality, blockchain, and genetic editing have become deeply rooted in the public consciousness through relentless coverage in the media. Chinese people love technology, trust technology, and rely on technology. While fully (or even excessively) enjoying the convenience brought by technology, they have consciously or unconsciously forgotten about its possible negative impacts, such as infringements upon personal privacy and the risk of being misled by inaccurate data.