I want to ask you about social media and “cancel culture.” In July 2020, Harper’s published “A Letter on Justice and Open Debate,” signed by a number of prominent people, including Noam Chomsky and J.K. Rowling. The letter criticized “an intolerant climate” on the Left, and in particular, “an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty.” The following year, in August 2021, The Atlantic published a piece by Anne Applebaum called “The New Puritans,” which used Nathaniel Hawthorne’s The Scarlet Letter to criticize social media “mob justice.” The irony of invoking the white woman’s public humiliation for being pregnant out of wedlock is that the book was published more than a decade before the Civil War. Black and Indigenous peoples’ ongoing bondage and claims to liberation are as unnamed in the book as they are in today’s epistles of moral panic.
But how do we negotiate this issue? Is calling people out on Twitter our only mode of addressing power dynamics in the AI ethics space? How can we put forward a vision that is constructive and not just reactive, even though our operational capacity is so low, even though we’re all exhausted, grieving, and torn in so many different directions? What is our vision for transformative justice in the context of knowledge production?

MW: Look, I have a lot to say here. First, I think there’s a visibility bias: people see when calls for accountability and redress spill into the public. They rarely see the agonizing work of trying to hold harmful people and institutions accountable behind the scenes. Work that’s too often not only unrewarded, but actively punished. Like many people, I have engaged in a number of accountability processes that didn’t end with Twitter callouts and are not visible to the public. In my experience, Twitter is always a last resort. There are failures upon failures upon failures within these institutions and with the way power moves within them, all of which happen before someone is going to take to Twitter or call on social media as a witness. Timnit taking to Twitter didn’t save her job.

Buried in the moral panic around “cancel culture” is a burning question about how you hold power to account when you’re in an institution that will punish you for doing so. What do you do when your wellbeing and duties of care dictate that you confront and curtail harmful behavior, but you know that any such attempt risks your livelihood and institutional and professional standing? Institutions protect power. Universities don’t want to touch a star professor who’s bringing in press and grants; tech companies have every incentive to coddle the person architecting the algorithm that is going to make them a shit ton of money. These corporations and corporate universities are structured to protect flows of capital and, by extension, to protect the people who enable them. There are infrastructures in place—including HR and most Title IX offices—to make sure that those who enable the interests of capital are elevated and to make sure that it’s as painful as possible for the people who might report anything.
There’s a Shakespeare program on the inside. There’s a radio show. There’s a newspaper. There’s a lot of men really working on themselves and preparing to go back out to the community and become productive citizens. If people understood that, they might be willing to give these men a second chance.

A lazy sprawl of brick and mortar straddling the Tennessee River in orange and beige: at first glance one could be forgiven for mistaking Chattanooga for any number of landlocked manufacturing towns. Like many of its postindustrial relatives, this city of 174,000 is in the midst of a protracted and irreversible economic transition. In the past two years alone, Dupont, Alstom, and MetalTek all shut down manufacturing plants that once employed thousands of people across the surrounding Hamilton County, where economic anxiety runs high and Trump won by sixteen points.
But Chattanooga doesn’t quite fit the tired narrative evoked in the president’s grim portrait of “rusted-out factories scattered like tombstones across the landscape of our nation.” This is a city with a plan. Situated in the heart of the Great Appalachian Valley, Chattanooga is widely known by a silicon-tinged moniker that sounds a bit more Santa Clara: “Gig City,” a reference to “the Gig,” the city’s municipally owned fiber-optic network. Funded in part by a $111 million federal stimulus grant and maintained by the Electric Power Board (EPB), Chattanooga’s public electric utility, the Gig’s ambitions feel more collectivist—and more fundamental—than the superficial “disruption” on offer from private-sector techno-utopians.
Yet when asked for details on how his service worked, Patterson shrugged off the question. He told the London Times in 1972 that even if his clients had nothing else in common, at least they had in common the fact that they had all joined Dateline. But had they? In 1969 he was arrested and convicted of fraud and conspiracy for trying to sell a list of young women to men who were looking for prostitutes. Patterson assured the men that all of these women were “good to go.” Whether these women had signed up for his service, or whether he had simply collected their names out of the phone book, was unclear. What was clear was that the women did not know he was using their names in this fashion.
Though Patterson was convicted and somewhat disgraced, this setback didn’t deter him. But it didn’t seem to teach him much, either. Throughout its existence, a veneer of sleaze plagued Dateline. Women customers often complained of being matched up with men they had nothing in common with, or whose questionnaire preferences were in direct opposition to theirs. This was ironic, considering how thorough Patterson’s questionnaire was—it asked people for information that would be “Big Brother’s Dream,” in the words of the Times of London. By contrast, Joan Ball’s more conservative and much shorter questionnaire seemed to result in better matches.
By the time Ting left New York, the city had become a viral hot zone. Before she departed, Taiwanese authorities required that she fill out an online health screening form, providing her medical information and, most importantly, an address where she could quarantine in Taiwan. Upon landing, all passengers underwent testing, the results of which were given two days later. There was also a changing area where passengers could change out of their plane clothes. Alcohol was provided for full-body disinfection. Ting passed through immigration and was escorted to a special quarantine taxi, to take her and other travelers to their quarantine locations. She was required to spray herself down with alcohol again before entering the cab.

When she arrived at her apartment, she was called by an official from her local public health center (衛生所) assigned to her. They exchanged contacts on LINE, a popular messaging app in Taiwan similar to WhatsApp, to stay in touch. She also had to join a special LINE group, where she was expected to report her temperature and wellbeing twice daily, at 9 a.m. and again at 3 p.m. Soon after, on that same day, a government worker arrived at her door with a bag of supplies, including garbage bags, extra masks, and some food. She signed paperwork committing to the entire fourteen-day quarantine without leaving her apartment. If she had not owned a cell phone, the government would have provided one for her.

How would the government know whether Ting kept her quarantine? The authorities use mobile phone location data and cell tower triangulation to draw a “digital fence” boundary around an individual’s home. If you step outside of this zone, or if you turn off your phone, an alert is sent to the police and local health officials. Ting tells me a story of a friend whose phone shut down suddenly while in quarantine. Within a minute, the police knocked on his door.

When I asked Audrey Tang, a former computer programmer and tech entrepreneur who now serves as Taiwan’s first Digital Minister, about the privacy concerns raised by the digital fence, she was quick to point out the ways in which the program differed from surveillance tools used in other countries, such as smartphone apps or physical bracelets. Tang believes the Taiwanese approach is far less harmful. “First, it’s not GPS,” she says. “We are not asking you to install an app that reports GPS.” The level of location specificity provided by such information would be unnecessary for enforcing home quarantines, she explained.
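In rough outline, the enforcement logic works like a coarse geofence: as long as the phone keeps checking in with cell towers that cover the quarantine address, all is well; a silent phone or an out-of-zone tower triggers a dispatch. The sketch below is a toy illustration of that logic only; the tower IDs, threshold, and function are hypothetical, not Taiwan’s actual system.

```python
# Toy sketch of a tower-based "digital fence" check. All names and
# thresholds here are illustrative assumptions, not Taiwan's real system.
from datetime import datetime, timedelta

HOME_TOWERS = {"tower_017", "tower_018"}  # cells covering the home address
MAX_SILENCE = timedelta(minutes=15)       # how long a phone may go quiet

def check_quarantine(last_ping_at, last_tower, now):
    """Return an alert message if the fence looks breached, else None."""
    if now - last_ping_at > MAX_SILENCE:
        return "ALERT: phone silent or off; notify police and health officials"
    if last_tower not in HOME_TOWERS:
        return "ALERT: phone connected outside the home zone"
    return None

# Example: a phone that went dark twenty minutes ago triggers an alert.
now = datetime(2020, 4, 1, 12, 0)
print(check_quarantine(now - timedelta(minutes=20), "tower_017", now))
```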
Moreover, telecommunication companies are already collecting the data being used to construct the digital fence. “This is not new information being collected,” Tang emphasized. The emergency warning broadcasting system, which sends texts about flash floods or earthquakes, relies on the same data. “Instead of collecting new data or requiring you to install a new app,” Tang said, “we repurpose existing data and existing notification mechanisms.” Most importantly, the program is sharply circumscribed: the data is only used to implement the home quarantines, Tang insisted, and nothing else.

Fork the Government

Technological solutions have not only come from the top down, however. They have also come from the bottom up. This is due in large part to Taiwan’s uniquely robust civic technology community, known locally as g0v (pronounced “gov zero”). In its current form, g0v grew out of the Sunflower Movement in 2014, a student-led protest sparked by concerns that a trade pact with mainland China would give Beijing more influence over Taiwan. But protesters also demanded more transparency and accountability from the Taiwanese government—demands that were taken up by the g0v movement.
Where once you might have had to do some research and guesswork to see how much your home cost, the Zestimate claims to make checking in on your home equity as easy as looking up the weather forecast. But why stop at your home equity? A teenager once told me that, at his school, kids would look up the Zestimates of each other’s homes, making fun of kids whose families’ estimated home values were low. I was never able to fully confirm this story as a broad trend, but a quick glance at Twitter and TikTok—both rife with anecdotes from people whose prospective romantic partners attempted to impress them by sending Zillow links to the properties the suitors owned, and from people who did their own Zillow research into the living spaces occupied by potential partners—gives me no reason to doubt it.
Yet despite the satellite-recon dreams of its founders, Zillow is often wrong, and its estimates can fall drastically short of a property’s ultimate market price. Shortly before the pandemic, Zillow began a program known as iBuying, using its vast troves of housing market data to automatically predict which houses it could buy and flip for a profit. The algorithm was off, and the company suddenly found itself loaded with properties it was unable to profitably sell. In late 2021, Zillow shut down the iBuying program after getting out over its skis, taking a $540 million write-down and laying off 25 percent of its staff.
JS: I am uncomfortable with lying in general, even when it isn’t harmful, or if it’s just an exaggeration. I had a couple of advisory conversations recently where I was asked whether I would ever do Reboot full-time as a nonprofit. I told them, “Oh, I feel like I couldn’t get the money to do it.” And they were like, “No, you can get the money, that’s easy. But in order to get the money, you have to say you’re gonna scale Reboot to 20,000 people and 200 schools.” On the subway home, I was just like, do I want those things? Would I enjoy the version of Reboot where I was trying to scale it 100 times?

JW: I want to make a distinction. When I use the word “lying,” I mean saying things that are not true to one’s self. And this doesn’t have to be an outright lie. It can be something that is factually true, but misaligned with who you are.

I remember calling some friends as I was getting ready to raise a round. Their advice was: “Act like Mark Zuckerberg.” Apparently Mark had a reputation of being disrespectful to VCs and telling them he didn’t need their money. That shocked me. And I think it’s because being disrespectful goes against my core values. I’ve never intentionally disrespected someone. And also, I couldn’t imagine telling VCs I didn’t need their money. What a concept. Of course I need their money! What sort of world are you coming from? Like, how can you not need money? And then I was looking at Zuckerberg and realized, oh yeah, his dad gave him a $100,000 loan to help start Facebook. He didn’t need other people’s money—at least not in a food-on-the-table type way.
Second Languages

You all had to learn to present yourself a certain way in order to be maximally appealing to the gatekeepers of the industry, whether it was VCs who might fund your startup or companies that might hire you. But how did you learn to perform these roles? Was it just a process of trial-and-error, or was there a more systematic way that you taught yourself the cultural protocols of the Valley?

JS: When I first got to college I had imposter syndrome, because I met so many people who were child prodigies. They were on magazine covers. I hadn’t done anything. I felt depressed. I also noticed that everyone in Silicon Valley was speaking a language that I didn’t understand. Like literally, I didn’t know what the words meant. So for a year I read a bunch of books that seemed to be the Silicon Valley canon: Zero to One by Peter Thiel, The Hard Thing About Hard Things by Ben Horowitz, Hackers: Heroes of the Computer Revolution by Steven Levy, Radical Candor by Kim Scott. I read whatever I thought other people were reading. Because I didn’t have a network yet—I didn’t have a way to learn directly from other people. But I wanted to know how the Valley thought.
The Black internet has a long history. It has multiple points of origin, as Charlton McIlwain has documented—from Afronet, a BBS network for Black users, to NetNoir, an AOL-based portal “devoted to Afrocentric material,” both of which launched in the mid-1990s. Today, the Black internet has entered the platform era, distributing its riches across Twitter and Instagram and YouTube.
What would it mean to take the Black internet seriously? What would it mean to see Black digital practices (in all their diversity) not in pathological terms—as hailing from the wrong side of a “digital divide”—but as creative, joyful, affirming? What if the Black internet offers a standpoint from which the rest of the internet can be seen, and critiqued, more clearly?

These are some of the questions that guide the work of André Brock, Jr., associate professor of Black Digital Studies at Georgia Institute of Technology and the author of Distributed Blackness: African American Cybercultures. In his work, Brock uses a methodology that he calls “critical technocultural discourse analysis” (CTDA). “It decenters the Western deficit perspective on minority technology use to instead prioritize the epistemological standpoint of underrepresented groups of technology users,” he writes, with the aim of conducting “a holistic analysis of an information technology artifact and its practices.” In other words, CTDA asks, why do people do what they do on the internet—especially when “people” are not just white, cis, heteronormative men?

Central to CTDA is the idea of the “libidinal economy,” which originates with Freud, was further developed by the French philosopher Jean-François Lyotard, and has more recently been taken up by Fred Moten, Frank B. Wilderson III, and other Black thinkers. A libidinal-economic approach emphasizes the role of emotional and psychological intensities in driving anti-Blackness, rather than the more rationalist models of human behavior derived from political-economic approaches. Issue editor J. Khadijah Abdurahman sat down with Brock to trace the history of disinformation from Reconstruction to the present, and to discuss “the unholy trinity of whiteness, modernity, and capitalism.”

How do you see the state of mis/disinformation research? What do you think is missing from the conversation?

Disinformation is only perceived as bad when it serves to disrupt the interests of whiteness and white power. White power sounds strong, but it fits. During Reconstruction, the country found all sorts of creative ways to keep Black folk from the polls, up to and including murder. That wasn’t a problem. Du Bois documented this extensively in Black Reconstruction, but misinformation against non-whites is typically a footnote in history texts and media reports because it serves the telos of American democracy.
Misidentifications can also be a problem, even though companion apps for community scientists generally require multiple identifications before an observation is confirmed, and some apps like iNaturalist use computer vision to suggest identifications. In an effort to ensure the quality of their dataset, scientists using crowdsourced observations for research may treat the number of contributions a user has made as a proxy for data quality, and filter out users with a weak contribution history.
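As a toy illustration of that heuristic, the sketch below keeps only observations from contributors whose count of previously confirmed identifications clears a threshold. The field names and cutoff are assumptions for the sketch, not any platform’s actual schema.

```python
# Filter crowdsourced observations by contributor history: a user's count
# of previously confirmed identifications stands in for data quality.
# Field names and the cutoff below are illustrative assumptions.
MIN_CONFIRMED_IDS = 20

def filter_observations(observations):
    """Keep observations from users with a strong contribution history."""
    return [
        obs for obs in observations
        if obs["user"]["confirmed_ids"] >= MIN_CONFIRMED_IDS
    ]

records = [
    {"species": "Danaus plexippus", "user": {"confirmed_ids": 143}},
    {"species": "Danaus plexippus", "user": {"confirmed_ids": 2}},
]
print(filter_observations(records))  # only the experienced user's record survives
```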
Since the charisma, visibility, rareness, and location of a given species can all affect data collection in ways that don’t necessarily reflect the species’ actual distribution, it can be difficult to determine whether an absence of observations corresponds to a real decline of the species or something else. Some platforms try to account for this in different ways, but others, wary of user attrition, are hesitant to add barriers to submitting observations. After all, what matters most to the platforms is attracting and retaining users. Helping out scientists is a secondary concern.
Neurath believed that such a process would enable a particular kind of rationality to emerge. Even where it proves impossible to make clear and precise calculations, he argued, we can still decide rationally. However, the rationality we deploy will be practical and political rather than purely algorithmic. People will have a chance to voice both their concerns and their desires, before arriving at collective decisions about how to shape, constrain, and direct the production process. They will balance how much they want to consume against how much they want to work. They will weigh their need for energy to heat their homes and power their workplaces against values of ecological sustainability and intergenerational justice. They will decide how much of their time and resources should be set aside for expanding or transforming production and how much for cultural, athletic, and intellectual activities.
In Neurath’s model, decisions made collectively, at the highest level, would then filter down through the rest of the economy, to be implemented across various industries and workplaces. But how would that work exactly? How are local production decisions made? What happens if conflicts or collisions arise—for instance, between the decisions of society as a whole and the demands of workers in pencil factories, producing goods to meet society’s needs?

These complexities suggest that what we need is not one society-wide protocol but many protocols—many structured forms of communication that enable people to reach decisions together. Algorithms would have an important role to play. They would codify what philosopher John O’Neill describes as “rules of thumb, standard procedures, default procedures, and institutional arrangements that can be followed unreflectively and which reduce the scope for explicit judgements,” streamlining the planning process so it doesn’t become an endless series of meetings. At the same time, we would need some set of rules for how to tie all of the protocols together, and to integrate them with the algorithms, in order to create a unified planning apparatus based on software that is easy to use, transparent in its outcomes, and open to modification.

After all, even if we incorporate qualitative goals into our planning, we still have to solve the socialist calculation problem. Producers still have to make decisions that add up into a coherent production plan.

Freely Associated Producers

Neurath’s emphasis on democratic decision making was essential. But by proposing the idea of the protocol, he raised more questions than he could answer, especially with the limited technologies available to him at the time. Towards the end of his life, Neurath spent years trying to determine how semi-literate peasants and urban workers could be incorporated into a planning protocol, via the distribution of simple graphical representations that he called isotypes.
The likelihood of very little interference is why Green Bank was selected in 1954 as the location for a new radio astronomy observatory facility. It’s close to Washington DC, yet sparsely populated and conveniently protected on all sides by the Allegheny Mountains. But just in case, lawmakers drew a square on a map of the West Virginia-Virginia border, and the National Radio Quiet Zone was born: 13,000 square miles within which all devices that emit microwaves and radio waves on a large scale are severely regulated. Cell towers and radio stations do exist throughout the Quiet Zone today, but they must be directionally pointed away from the telescope and every request for a new one is subject to observatory approval. Within a ten-mile radius of Green Bank, however, cell towers and Wi-Fi networks are banned.
We are told that when we board the school bus that will take us to the telescope, our cell phones must be switched off. Even though there is no cell reception here, our phones are tiny transmitters that will keep searching for a signal, emitting radio waves all the while. Dave steps up onto a thin strip of stage and wheels over a cage connected to a small screen. The cage is a Faraday cage, which blocks all electromagnetic radiation, and the screen is a spectrum analyzer, which shows how much radiation a given object is spitting out.
The 1990s were a heady time, but not everyone was convinced. If Wired was the pulpit for a new gospel of venture-funded tech, Schultz’s mailing list, called Nettime, was an effort to build a home for the early commercial internet’s discontents. Drawing on various contemporary anti-capitalist currents, from the anti-globalization movement to Italian autonomism to Berlin’s lively squatter movement, Nettime aimed to synthesize an alternative to the techno-determinist optimism oozing out of Silicon Valley: the worldview that media theorist and Nettime regular Richard Barbrook named the “Californian Ideology” in a 1995 essay co-written with Andy Cameron.
The Californian Ideology, through its “bizarre mish-mash of hippie anarchism and economic liberalism,” celebrated the rampant commodification of digital networks as a force for personal liberation. The 1990s are often remembered as a time in which this vision of the internet went unchallenged, but the Nettime crowd wanted to chart a different path forward. They developed theories of digital culture, pioneered tactics for new media activism, and wrote ground-zero critiques of the commercial internet as it took shape around them. The mailing list itself was also a platform for experimental forms of collaborative writing that tried to embody a different experience of being online. At a moment in history when profiteers and privatizers were terraforming the internet into the market-saturated system we know today, the Nettime circle gestured toward a more collective, less commodified alternative—but only vaguely.
I remember looking for rent stabilization data when I lived in New York, and the closest I found was this project, AmIRentStabilized.com, which is someone’s personal project to help people do this. The city has that data and could share it with tenants, but they don’t.

Erin: Totally.

Azad: And New York is a special case where there are different levels of rent stabilization and rent control, and the property owners know the status of their building, but it’s not easy for tenants to look up that information themselves.
The website walks you through the process of contacting a city office, which is then supposed to mail you the answer, but if they don’t, you’re supposed to set a calendar reminder for yourself to follow up with them!

Azad: That’s because these institutions are beholden to property owners who fight tooth and nail for data about them to remain inaccessible. And with open data, as soon as the data gets politicized, it gets pulled. We just saw this in Chicago, where there were some open datasets about policing in the city—several reports were written about them—and then that open data got removed. The neoliberal fantasy that motivated the open data movement in the first place was like, “If you just open the data, people will make great business products out of it!” But it’s proven to be much more complicated.

EvictorBook shows historical eviction data for a given building—data that’s otherwise difficult to find and piece together.
A middle-aged Uyghur businessman from Hotan, whom I will call Dawut, told me that, behind the checkpoints, the new security system has hollowed out Uyghur communities. The government officials, civil servants, and tech workers who have come to build, implement, and monitor the system don’t seem to perceive Uyghurs’ humanity. The only kind of Uyghur life that can be recognized by the state is the one that the computer sees. This makes Uyghurs like Dawut feel as though their lives only matter as data—code on a screen, numbers in camps. They have adapted their behavior, and slowly even their thoughts, to the system.

“Uyghurs are alive, but their entire lives are behind walls,” Dawut said softly. “It is like they are ghosts living in another world.”

Almost every day, I receive an email from Google Alerts about a new article on China’s “social credit system.” It is rare that I encounter an article that does not contain several factual errors and gross mischaracterizations. The social credit system is routinely described as issuing “citizen scores” to create a “digital dictatorship” where “big data meets Big Brother.” These descriptions are wildly off-base. Foreign media has distorted the social credit system into a technological dystopia far removed from what is actually happening in China. Jeremy Daum, a legal scholar at Yale Law School’s China Center, has suggested that part of why the misreporting persists is because the United States and Europe project their fears about extensive digital surveillance in their own societies onto China’s rapid technological rise. Compounded by the rhetoric around a US-China “arms race” in developing artificial intelligence, the idea that China might somehow perfect an exportable model of a totalitarian surveillance state has made people more willing to believe exaggerated accounts of the social credit system.
In response to the misreporting, several researchers have attempted to correct the narrative with well-documented examples of where foreign press coverage gets things wrong. Common mistakes include the assumption that all surveillance technology in China feeds into a centralized database, that every recordable action is assigned a point value and deducted from a comprehensive score, and that everyone in China receives such a score.
The case of Cloudflare clarifies the stakes of choosing different paradigms for security. At some point, technical questions about how vulnerabilities are identified and mitigated collide with questions about how technical security relates to other forms of security. When is a site like 8chan benefiting from technical security that enables its members to make other communities less secure? Drawing on the insights of hacktivists, anti-security hackers, and digital security advocates, we always have to ask who security works for—a sociotechnical line of thinking that computer security professionals, often keen to avoid anything that might be seen as political, may find uncomfortable.

We also have to foreground the role of the profit motive in making platforms prone to abuse. Data mining, corporate ownership of user data, targeted advertisements, design and policy decisions to maximize user engagement—these can all create opportunities for bad actors to cause harm. Put simply, in many cases the business model itself might be the foundational vulnerability. But such fundamental product and business issues are generally outside the scope of what technical security researchers working for these organizations are paid to identify and remediate.
After two more 8chan-linked shootings, Cloudflare ultimately caved to public pressure and cut ties with the site. But a million similar issues are raised on a smaller scale every day, where the question isn’t whether to host a single site, but how to treat a particular piece of content, or a feature that allows that content to be promoted to a particular person. These issues can’t be adequately addressed simply by tweaking interaction algorithms, removing “like” buttons, or developing better content moderation protocols.

A reassessment of what is involved in “security” is required. And, as with the 1990s hackers, this notion of security must be built, at least in part, by people who aren’t afraid to pick a fight. What made an earlier generation of hackers so effective wasn’t just their technical expertise but their willingness to antagonize the big software vendors. It was through their efforts that we now enjoy a modicum of technical security. Standards and protocols that protect consumers and citizens from harms like infrastructural sabotage, identity theft, and commercial fraud exist because hackers aggressively drew attention to corporate incompetence and demanded accountability.
The best way to explain it is to tell you about the first tool we ever made. For our campaign in 2015 with borrowers who had attended Corinthian, we worked with lawyers to develop an app like TurboTax to file for defense to repayment.

TurboTax is easy. A little wizard comes up and asks you questions, you answer them, and, at the end, it tells you what you owe. We thought, “What if we made something similar?” So we developed an app that submits a letter to the Department of Education that says, “You lied to me and here’s the evidence. I want my loans canceled.”

Based on the success of that tool, we decided to replicate that model with other types of debt on the new platform. So now we have six tools. But what’s funny is that when you click on the “Defense to Repayment” link from the tools landing page, it just takes you to the Department of Education website because they’ve taken over the process. We started it and they sort of… stole it.

Seriously?

Yeah. A hundred thousand people filed for defense to repayment in the year after we launched our tool. The Department of Education couldn’t allow us to do that. Who were we to submit these letters en masse? So they created their own web app, which looks suspiciously like ours. Now borrowers have to go to the official Department of Education website to file. But the process of applying didn’t exist before we invented it.

They co-opt your work, they don’t pay you, and they don’t credit you in any way. It just blends into some federal website.

The federal bureaucracy takes it over.

There are so many cases like that in the history of labor organizing as well. Victories that are hard won just get sucked into existing systems and made to appear as if they were always there.

It erases the struggle that led to the change. And, like I said, about a billion dollars in debt have been erased now due to the operationalization of that law. The law had existed, but people weren’t using it. A billion dollars is a drop in the bucket, but nevertheless a good sign that organizing in this way can be effective.
You mentioned that a campaigning feature is coming to the platform. How do you envision campaigns coming together?

The intention is to organize democratically, so people on the platform determine the campaigns. What if Debt Collective members in some city found each other on our platform and decided to do something together locally? Maybe there’s a payday lending store you want to shut down, so you work with your neighbors and have the support of our organizers and our resources. Maybe there’s a predatory for-profit college, a hospital that’s gouging people, courts and police who are sticking people with fines and fees. The bonds of solidarity can be created at the local as well as at the national level.
Much discussion of social inequality and class divisions in Australia treats these issues as phenomena that occur elsewhere. Home ownership is seen, somewhat uncritically, as an inviolable fixture of the Australian dream rather than a tax racket for the rentier class. Pouring scorn on “dole bludgers” and shaming welfare cheats has long endured as a national sport. Anxiety about welfare dependency persists, even though dependency is at historically low levels.
In June 2017, the government released a report of the nation’s top ten suburbs for welfare non-compliance—or “bludger hotspots,” to use the parlance of a News Corp tabloid. These are places where recipients failed to meet the conditions for receiving welfare, which include attending interviews and appointments. It’s difficult to see such a move as anything other than a way to shame and stigmatize these communities.
Li Yinghui, a thirty-nine-year-old housewife in the village of Heyang, Zhejiang, agrees. “Wages around here aren’t high enough for me to be shopping online every day,” she told me. Low wages aren’t the only factor that constrains Li’s capacity to participate in e-commerce. “My phone can’t even run most [apps]; it’s a cheap one that I bought for 1,000 yuan ($150),” she explained. Her fifty-six-year-old neighbor, Zha Xiaolong, is even less interested in letting mobile technology change his habits. “What’s an app?” he asked as he squinted over the top of his screen. “I only use WeChat.”

These are the obstacles that PDD has overcome so successfully to become a major platform for rural e-commerce. A 2018 study estimated that 40 percent of PDD’s users live outside even “fourth-tier” cities in China. “Those inside [Beijing’s] Fifth Ring Road won’t have heard of us,” PDD’s founder Colin Huang told the magazine Caijing, arguing that he was bringing the government’s vaunted “consumption upgrade” to the countryside. Upgrading consumption “doesn’t mean letting people in Shanghai live like Parisians,” he explained, “but letting the people of Anqing in Anhui province have paper towels and tasty fruits.”

Viral Vulgarity

From the government’s perspective, building a consumer society in the countryside is essential for economic development. Platforms like PDD are clearly serving that goal. Yet if you give people tools for participating in the national economy, you have to expect them to use them for their own ends—and this often gives visibility to economic conditions and desires that contradict the image of the modern nation that the government wants to project.
The urban-rural gap in China is not just an economic divide in consumption power but a cultural divide based on consumption habits. Rural residents don’t just consume cheaper goods. They consume different kinds of goods—including, in some cases, ones that the government may not approve of. This is no less true in online media than in e-commerce. If counterfeit goods on e-commerce platforms popular with rural users pose a challenge to the government, so too does “vulgar” content on social media apps popular with the same demographic. In both cases, the successful push to bring the countryside online has yielded unexpected consequences that the government is struggling to control.
You can make money for Barry Diller while you sit on the bus. You can make money for Barry Diller while you sit on the toilet. When you tell the internet what you want, the internet remembers. Somewhere, a company is building a library of every longing on earth. A record of every fetish, every crush, every passionate and perverted thought persists on a hard drive in a climate-controlled room in Virginia or Dublin or Singapore.
What an erotic, and terrifying, vision: our desires all crammed together, sharing the same strips of disk, indefinitely. My dick pic next to your love letter, your Google search for tentacle porn next to my flirtatious Facebook message. One soup of sexuality, expanding at the speed of human thought.
More public oversight is welcome, but insufficient. Regulating how data is extracted and refined is necessary. To democratize big data, however, we need to change who benefits from its use. Under the current model, data is owned largely by big companies and used for profit. What would a more democratic model look like instead? Again, the oil metaphor is useful. Developing countries have often embraced “resource nationalism”: the idea that a state should control the resources found within its borders, not foreign corporations. A famous example is Mexico: in 1938, President Lázaro Cárdenas nationalized the country’s oil reserves and expropriated the equipment of foreign-owned oil companies. “The oil is ours!” Mexicans cheered.
Resource nationalism isn’t necessarily democratic. Revenues from nationalized resources can flow to dictators, cronies, and militaries. But they can also fund social welfare initiatives that empower working people to lead freer, more self-directed lives. The left-wing governments of Latin America’s “pink tide,” for instance, plowed resource revenues into education, healthcare, and a raft of anti-poverty programs.
The ruling elite built samara.kg to get Jeenbekov elected, and he remains in power. Still, the fallout from the scandal may come back to haunt the government. Rinat Tuhvatshin, executive director of Kloop Media, who was summoned and questioned by Kyrgyz security services for three hours over his team’s investigation into Samaragate, is convinced of it. If the country’s current leadership is pushed out, a new round of leaders may use Samaragate “as some sort of leverage against people who were formerly in power… [S]ome new president will say: ‘Ok, what do we have on Samara? Let’s use it against our predecessors.’”

Medet Tiulegenov, a professor of international and comparative politics at the American University of Central Asia in Bishkek, Kyrgyzstan’s capital, says the scandal’s implications are even broader. The samara.kg story “has wide ramifications for democratic development in Kyrgyzstan,” he says. Biometrics were supposed to strengthen the integrity of elections. Instead, they have been used to undermine it:

The country has developed electoral procedures that minimize electoral fraud mostly due to the introduction of biometric identification of voters and voting with electronic scanners. Samaragate, however, puts in question this achievement. Moreover, a new challenge to civil society might be the difficulty of tracking and monitoring possible manipulations of electoral data in the future.
Digital Distortion

Samaragate is just one example of how technology can warp elections, especially in fragile democracies. There are many more cases—most of which will not make it into local news, much less receive global attention—of governments in poor countries using technology to distort the democratic process, not only in Central Asia but in countries like the Philippines, Myanmar, and Cambodia. And these cases are unlikely to be addressed in any meaningful way.
What is at the center of my attention are land and water sovereignty struggles, such as those over the Dakota Access Pipeline, over coal mining on the Black Mesa plateau, over extractivism everywhere. My attention is centered on the extermination and extinction crises happening at a worldwide level, on human and nonhuman displacement and homelessness. That’s where my energies are. My feminism is in these other places and corridors.
Do you think the cyborg is still a useful figure?

I think so. The cyborg has turned out to be rather deathless. Cyborgs keep reappearing in my life as well as other people’s lives.

The cyborg remains a wily trickster figure. And, you know, they’re also kind of old-fashioned. They’re hardly up-to-the-minute. They’re rather klutzy, a bit like R2-D2 or a pacemaker. Maybe the embodied digitality of us now is not especially well captured by the cyborg. So I’m not sure. But, yeah, I think cyborgs are still in the litter. I just think we need a giant bumptious litter whelped by a whole lot of really badass bitches—some of whom are men!

Mourning Without Despair

You mentioned that your current work is more focused on environmental issues. How are you thinking about the role of technology in mitigating or adapting to climate change—or fighting extractivism and extermination?

There is no homogeneous socialist position on this question. I’m very pro-technology, but I belong to a crowd that is quite skeptical of the projects of what we might call the “techno-fix,” in part because of their profound immersion in technocapitalism and their disengagement from communities of practice.

Those communities may need other kinds of technologies than those promised by the techno-fix: different kinds of mortgage instruments, say, or re-engineered water systems. I’m against the kind of techno-fixes that are abstracted from place and tied up with huge amounts of technocapital. This seems to include most geoengineering projects and imaginations.

So when I see massive solar fields and wind farms I feel conflicted, because on the one hand they may be better than fracking in Monterey County—but only maybe. Because I also know where the rare earth minerals required for renewable energy technologies come from and under what conditions. We still aren’t doing the whole supply-chain analysis of our technologies. So I think we have a long way to go in socialist understanding of these matters.
To talk about surveillance online, you have to talk about how the internet works, and it seems like a fine line between being too high-level and too technically nitty-gritty. How do you teach about the internet?

Most of the time, what we're teaching is fairly 101-level. In terms of getting into the structure of the internet, we try to avoid getting in the weeds just because most people, whether they’re the librarians who we’re training for the first time or the patrons who our graduates go on to train, come into a privacy-focused educational setting feeling hopelessness and resignation. Often, they're dealing with a problem that is happening to them right now, like stalkerware or having had their identity stolen. They're already freaked out. Most people think they don't have a good sense of how technology works. They already feel stupid asking questions about it.

In LFI, we work with a trainer named Mallory Hanora, who comes from the world of prison abolition and teaches us how to create transformative workshops. Their framework is about incorporating the experiences of the people in the room, being as nonhierarchical as possible, meeting people where they are—all these sorts of things to create an environment that people can actually learn in.

Then, LFI graduates are able to bring that kind of framework forward in their workshops with their own patrons. Usually when they teach, no matter what the focus of the lesson is, people will say during the question and answer period at the end things like, "I started using this password manager. Is this good?" Or, "My nephew told me about this thing I can download to block ads. What do you think about it?" That is fascinating to me because it confirms to me that people do care about their privacy. They are already taking steps to mitigate or manage what is revealed about them on the internet. They haven't developed a nuanced threat model for their own unique situation, but most people are already doing something. In those moments, LFI librarians have the information and facilitation tools to help patrons build on what they already know.
User Testing in Uganda

I want to make sure we also talk about Tor. What does your work on the community team look like?

I was the community team lead for two years and I'm still a contributor, though not at the level I used to be as Library Freedom Project has taken up more of my time. The community team works on outreach—teaching people about Tor and getting more of an understanding of who it's for and what it does. LFP really fits into that work. But beyond LFP, Tor is a global community, so our outreach is global.

The interesting thing about Tor software development and usability research is that we don't do the surveillance stuff that some software does to make sure that it's usable for you. We don't track your clicks or how your mouse moves around the page; we're not doing A/B testing and getting analytics back. So how are we supposed to know how well Tor is working for you? For years, we got that feedback directly from the most technical people because that was who was using Tor. But the problem with that method was that it meant we were designing for the most technical people, and if only the most technical people are using Tor, then Tor doesn't actually work.

A couple of years ago, we started focusing on making the Tor Browser usable for people doing social justice movement work—water rights, reproductive justice, queer and trans issues—around the world. We wanted to focus on members of these movements in the Global South for a few reasons. One is that we didn't want to have the kind of free software project that was all white dudes in the US and Western Europe, which is what most of those projects are and, to be honest, what ours kind of was. We also thought it was important to understand the context of using Tor in places where the internet is much slower and more expensive—places where there are all kinds of environmental factors that make using it harder, even if you know how to use it technically.

Our teams started traveling to meet with community organizers who were part of political movements in different parts of the world. We spent a lot of time learning about their contexts, what kinds of work they do, and who their adversaries are. Then, we showed them the Tor Browser, conducted some user tests, and asked for feedback. That's how Tor usability work has been happening for some time now.
You’re already seeing big changes at investment banks. Even though investment banks continue to be very large in terms of their physical footprint, number of employees, and impact on the economy, the actual participants inside banks have changed a fair bit. It’s far more automated. Many of the actual operations inside an investment bank are done by computers. It’s not humans deciding to buy Apple stock; it’s computers deciding to buy Apple stock. So that job shift is already happening. Financial firms are increasingly becoming tech firms. JP Morgan Chase employs 50,000 technologists, two-thirds of whom are software engineers. That’s more engineers than many big tech firms: Facebook, for example, employs about 30,000 people total.

You've been in the financial industry for a little while so you’ve seen this transformation firsthand. How has the influx of technologists changed the industry?

The very clubby nature of traditional financial firms like investment banks has been diluted. You’ve got a lot more geeks and nerds. You don’t see certain jokes being made. Football conversations have been replaced by conversations about restaurants or other staples of yuppie culture.
The culture has mellowed quite a bit. It’s less driven by adrenaline. It’s less loud. The value is provided not by the person yelling into the phone but by the person who’s sitting at their computer, writing the right algorithm, who needs a little bit of thoughtfulness to do that work. The old model was about driving transactional flow through sheer energy. The new model is about driving transactional flow through computers.
The only option to support his family and get them out of debt was for Owen to pick up government contract work, bouncing around federal nuclear sites. “From 2011 to 2017, I was gone. [I was] living in a hotel, going on an airplane, sleeping in a minivan,” Owen says. He would FaceTime in for the Advent calendar, to say grace at the dinner table, and even to catch the occasional dinner and a movie with Cassie, who was raising and homeschooling the children on her own. The kids called him “Daddy-in-a-box.” Working for the government, Owen was overwhelmed by the inefficiency. The bureaucracy was farcical, the technology was wildly out of date, and the cubicles made for an environment less “move fast and break things” and more Dilbert. Worse still, Owen, a privacy hound, had to give up a lot of personal information to receive a security clearance. Every place he ever lived, every relative he ever contacted, and every illicit substance he ever used were just some of the data captured in the 127 pages of his SF-86 clearance form. And when hackers breached the Office of Personnel Management in 2015, all of that information was stolen. “Damn feds lost my DNA sample… they got intel on my whole family,” Owen says.
In 2014, during his ample free time at work, Owen came across a white paper about a new blockchain called Ethereum. While Bitcoin is a distributed list of financial transactions, Ethereum is that plus a distributed list of computer states. In other words, it stores programmable ones and zeros that act as a giant, decentralized computer running public, uncensorable code. It’s like Amazon Web Services (AWS) in that users can pay to run applications on it (using its native currency, Ether) but without a central Amazon-like authority. Today, even though it is thousands of times more expensive and millions of times slower than AWS, the Ethereum distributed computer runs code simultaneously on the computers of a few thousand strangers.
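To make that distinction concrete, here is a toy sketch in Python: Bitcoin-style state is just a ledger of balances, while Ethereum-style state adds program storage that every node updates in lockstep. This is a cartoon of the idea, not the actual protocol; all names and values are illustrative.

```python
# Toy illustration of the distinction above: Bitcoin as a list of balance
# transfers, Ethereum as that plus arbitrary program state. A cartoon of
# the idea only, not the real Ethereum protocol.
state = {
    "alice": {"balance": 100, "storage": {}},
    "bob":   {"balance": 50,  "storage": {}},
}

def transfer(state, sender, recipient, amount):
    """Bitcoin-style: the only state is who holds how much currency."""
    assert state[sender]["balance"] >= amount, "insufficient funds"
    state[sender]["balance"] -= amount
    state[recipient]["balance"] += amount

def run_contract(state, account, key, value):
    """Ethereum-style: transactions can also update program state, which
    every node in the network replicates and executes identically."""
    state[account]["storage"][key] = value

transfer(state, "alice", "bob", 25)
run_contract(state, "bob", "greeting", "hello, world")
print(state)
```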
Sometimes, though, gaming gets serious. Games can embody a set of assumptions, even an ideology. Playing a game about cities, for example, you can absorb assumptions about how cities are supposed to be run. A model for gaming an auction can become the basis for an entire platform economy. New rules can restructure global financial markets. Does it work? Sometimes it seems that the game where humans are self-transparent rational actors, working with complete information, is the biggest make-believe of all.
Managers have promoted “teamwork” at work for decades. More recently, gamification has become a buzzword. In every job that must be done there is an element of fun, the song goes. You find the fun and—snap!—the job’s a game. But in reality, gamification may serve less as a technology to speed chores than to destroy solidarity, coaching people to compete constantly against their natural allies—and themselves.
Something like Ellie could be useful to the military in other ways, too. To identify and help all current and former personnel with PTSD would be a massive undertaking. Estimates from the US Department of Veterans Affairs suggest that between 11 and 20 percent of the 2.7 million service members who deployed to Iraq and Afghanistan—roughly 300,000 to 540,000 people—suffer from the disorder in any given year. Of those, DARPA says that only a small fraction seek help. It’s difficult to imagine recent administrations deploying the battalions of people—therapists, trainers, outreach personnel—needed to find and care for half a million or more people with PTSD. Automation, of the kind represented by Ellie, seems to hold out the possibility of treating mental health problems at scale, or even keeping soldiers on active duty for longer periods. If successful, computerized therapy could also be applied in other circumstances where human-to-human treatment is undesirable or impractical—such as in the midst of a pandemic.
Behind this possibility lurks a larger vision, too. Though the Ellie program is in some ways crude, it seems to herald a future system that can continuously track, report, and support users’ mental health on an ongoing basis. At the time of the demo, consumer devices like the Fitbit and Apple Watch were being marketed on the basis of their round-the-clock monitoring and data-collection features for physical health—information which would yield life-improving insights and interventions, the companies behind these technologies implied. More recently, researchers affiliated with Amazon published a paper describing efforts to determine a user’s emotional state from their voice. If an Amazon Alexa divined you were upset, it could ask what was wrong—and maybe upsell you on some indulgent self-care items. Supporting mental health could be one more reason to justify the ambient collection and interpretation of vast streams of data from our bodies and behavior.
At one level, this forgiving attitude reflected the influence of Bay Area hippies who shaped tech culture from the beginning. At another, it reflected the rise of new funding models. Early venture capitalists embraced risk rather than avoiding it; they expected most of the companies they invested in to fail. The failures did not matter, so long as the few that did succeed succeeded big.
The rise of software and, later, the web, made it possible to succeed at a whole new scale. This in turn enabled investors to tolerate more and bigger losses, because when the moonshot landed, the returns were astronomical. When you found the right product-market fit, you could eat the world, fast. In the meantime, with companies appearing and disappearing so quickly, people had an interest in staying friends, in sharing expertise, in going to Burning Man together.
Important also was the work of groups like Citizen Lab. Created in 2001 by a University of Toronto researcher partly inspired by cDc, Citizen Lab enlisted computer security techniques such as auditing, threat modeling, and penetration testing in support of civil society—an approach sometimes called “digital security.” Its research put abuses of power by states and corporations on the security agenda. After the organization conjectured that private security companies were helping governments commit human rights abuses by selling them hacking tools, a hacktivist named Phineas Fisher hacked and leaked documents and source code from the offending companies, making the theoretical practical and focusing public outrage.
Unlike in the 1990s, it wasn’t enough for organizations to take technical security seriously. Security research was now being weaponized to promote forms of insecurity—helping governments crack down on dissidents, for example. Building on the earlier ideas of the anti-security movement, these watchdog hackers made clear that identifying vulnerabilities and passing the information along to the authorities was far from sufficient to improve everyone’s security—in fact, it could be actively harmful. Security for some could mean insecurity for others.
A simple act of sabotage is to violate this assumption by generating “noise” while browsing. You can do this by opening random links, so that it’s unclear which are the “true” sites you’ve visited—a process automated by Dan Schultz’s Internet Noise project, available at makeinternetnoise.com. Because your data is not only used to make assumptions about you, but about other users with similar browsing patterns, you end up interfering with the algorithm’s conclusions about an entire group of people.
Of course, the effectiveness of this tactic, like all others described here, increases when more people are using it. As the CIA’s Simple Sabotage Field Manual explains, “Acts of simple sabotage, multiplied by thousands of citizens, can be an effective weapon…[wasting] materials, manpower, and time. Occurring on a wide scale, simple sabotage will be a constant and tangible drag on…the enemy.” Attacks of this sort—where we corrupt the training data of these systems—are known as “poisoning” attacks.
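As a concrete sketch of what automated noise generation can look like, the snippet below opens randomly chosen decoy pages at irregular intervals, in the spirit of Internet Noise. The URL list and timing are illustrative assumptions, not the actual project’s code.

```python
# Toy browsing-noise generator in the spirit of Internet Noise. The decoy
# URLs and timing here are illustrative, not the actual project's behavior.
import random
import time
import webbrowser

DECOY_URLS = [
    "https://en.wikipedia.org/wiki/Special:Random",  # random article each visit
    "https://www.bbc.com/news",
    "https://www.allrecipes.com/",
    "https://www.weather.gov/",
]

def make_noise(visits=5, min_delay=5.0, max_delay=30.0):
    """Open random pages at irregular intervals to pollute tracking data."""
    for _ in range(visits):
        webbrowser.open(random.choice(DECOY_URLS))
        time.sleep(random.uniform(min_delay, max_delay))

if __name__ == "__main__":
    make_noise()
```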
When it comes to angel investors and venture capital funds, while much of their money comes from endowments, pension funds, insurance companies, and the like, one major source is “family offices.” That is a euphemism for investment funds run for one or sometimes a handful of immensely wealthy families. As the historian Tom Nicholas points out in VC: An American History, the structure of modern venture capital investment grew out of the need to formalize family offices in the 1920s. According to an April 2021 Financial Times report, there are over 7,000 family offices worldwide (a 40 percent rise from 2017 to 2019), managing about six trillion dollars in assets. The trend line has been pointing up ever since the financial crisis of 2008. And, as Crunchbase reported in 2018, family offices seem to be increasingly keen on investing in startups directly: pivoting, as it were, from contributing to VC funds to behaving like VCs in their own right. A 2020 report by Silicon Valley Bank suggests that over 90 percent of family offices prefer to get involved in the early stages of venture investing (meaning Seed or Series A).

In Silicon Valley, many family offices were founded to manage the fabulous wealth of the tech billionaires themselves (for instance the Omidyar Network, which manages investments on behalf of the family of eBay founder Pierre Omidyar)—so relatively recent money. But some money is positively ancient. The Rockefellers’ family office, Venrock, was a crucial early investor in both Intel and Apple Computer. The Bechtel family, having made a fortune in construction going all the way back to the Western Pacific Railroad, incorporated as Bechtel in San Francisco in 1889. The family is as old money as they come in San Francisco—Bohemian Club, generations of Stanford alumni, the works. (I am—fun fact—writing this essay sitting across from a building on the Stanford campus that bears the Bechtel family name.) Their family office is called The Fremont Group, which was once run by former US Secretary of State and longtime SF high society fixture George P. Shultz. Through Trinity Ventures, founded in 1986, the Bechtels also invested directly in companies like Extreme Networks, Blue Nile, and mommy-supplier extraordinaire Zulily.
One partner at a venture capital fund told me that, in his experience, VCs turn to family offices when first starting out. Big as family funds can be, the inflow of capital represented by, say, a pension fund or a university endowment is both much bigger and much more reliable. Family offices generally do not get access to what a big university endowment gets access to. The only way family offices get to play in a hot new fund is to have invested in the firm’s funds back when it was just starting out. That means that on the whole family money comes into play early in the process of fund formation.
As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.
The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.
It can also be hard for them to consider solutions to political problems that might be extremely low-tech, and for that reason probably even more difficult. Take the question of what happens in the future, which may not be a hundred or two hundred years away, when robots or algorithms can do 90 percent of the jobs. How do you prevent that future from becoming a brutal neo-feudal nightmare? That seems to me like a political question rather than a technical one.
I think the strangest thing about being out here in the Bay Area is that the Aspy worldview has just completely saturated everything, to the point that people think that everything is a technical problem that should be solved technologically. It’s a very privileged view of very smart people who just want it to be sink or swim. It’s troubling.
At a high level, computer vision algorithms work by scanning an image, piece by piece, using a collection of pattern recognition modules. Each module is designed to recognize the presence or absence of a different pattern. Revisiting our cat-detector model, some of the modules might be sensitive to sharp edges or corners and might “light up” when coming across the pointy ears of a cat. Others might be sensitive to soft, round edges, and so might light up when coming across the floppy ears of a dog. These modules are then combined to provide an overall assessment of what is in the image. If enough pointy ear modules have lit up, the system will predict the presence of a cat. (A toy version of such a module is sketched below.)

When Li began working on computer vision, most of the pattern recognition modules had to be painstakingly handcrafted by individual researchers. For computer vision to be effective at scale, it would need to become more automated. Fortunately, three new developments had emerged by the mid-2000s that would make it possible for Li to find a way forward: a database called WordNet; the ability to perform image searches on the web; and the existence of crowdworking platforms.

Li joined the Princeton computer science faculty in 2007. There, she encountered Christiane Fellbaum, a linguist working in the psychology department, who introduced her to a database called WordNet. WordNet, developed by cognitive psychologist George A. Miller in the 1980s, organizes all English adjectives, nouns, verbs, and adverbs into a set of “cognitive synonyms… each expressing a different concept.” Think of a dictionary where words are assembled into a hierarchical, tree-like structure instead of alphabetically. “Chair” inherits from “seat,” which inherits from “furniture,” all the way up to “physical object,” and then to the root of all nouns, “entity.”

Fellbaum told Li that her team had wanted to develop a visual analog to WordNet, in which a single image would be assigned to each of the words in the database, but had failed due to the scale of the task. The resulting dataset was to be called ImageNet. Inspired by the effort, in early 2007 Li took on the name of the project and made it her own. Senior faculty at Princeton discouraged her from doing so. The task would be too ambitious for a junior professor, they said. When she applied for federal funding to help finance the undertaking, her proposals were rejected, with commenters saying that the project’s only redeeming feature was that she was a woman.
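Here is that toy module: a minimal sketch, assuming nothing beyond NumPy, of a single handcrafted vertical-edge detector slid across a tiny grayscale image. Real systems combine many such detectors, and modern ones learn them from data rather than handcrafting them.

```python
import numpy as np

# One handcrafted "module": a 3x3 kernel that responds to vertical edges.
kernel = np.array([[ 1, 0, -1],
                   [ 1, 0, -1],
                   [ 1, 0, -1]])

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a bright region on the right: one vertical edge

def convolve2d(img, k):
    """Slide the kernel over the image and record its response at each spot."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * k)
    return out

response = convolve2d(image, kernel)
print(np.abs(response).max())  # strong response where the module "lit up"
```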
Nonetheless, Li forged ahead, convinced that ImageNet would change the world of computer vision research. She and her students began gathering images based on WordNet queries entered into multiple search engines. They also grabbed pictures from personal photo sharing sites like Flickr. Li would later describe how she wrote scripts to automate the collection process, using dynamic IP tricks to get around the anti-scraping safeguards put in place by various sites. Eventually, they had compiled a large number of images for each noun in WordNet.
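Her actual scripts are not public, so the details below are invented for illustration. But the basic shape of such a collection loop is easy to sketch, assuming a prepared file of candidate image URLs for each noun; the polite delay stands in for her far more elaborate anti-blocking measures:

```python
import time
import pathlib
import requests

# Hypothetical input: one file per noun, one candidate image URL per line.
def collect(noun, url_file, out_dir="images", delay=1.0):
    dest = pathlib.Path(out_dir) / noun
    dest.mkdir(parents=True, exist_ok=True)
    for i, url in enumerate(pathlib.Path(url_file).read_text().splitlines()):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            (dest / f"{i:06d}.jpg").write_bytes(resp.content)
        except requests.RequestException:
            continue  # dead links are common at this scale
        time.sleep(delay)  # be polite; Li reportedly needed cleverer evasions
```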
Working in the 1950s, Benjamin was one of the first doctors to take trans people even marginally seriously, and at a vital time — the moment when, through public awareness of people like Christine Jorgensen (one of the first trans people to come out in the United States), wider society was first becoming seriously aware of trans people. While media figures argued back and forth about Jorgensen, Benjamin published The Transsexual Phenomenon in 1966, the first medical textbook about trans people ever written.
Containing case studies, life stories, diagnostic advice, and treatment approaches, The Transsexual Phenomenon became the standard medical work on trans subjects, establishing Benjamin as an authority on the matter. And it was, for its time, very advanced simply for treating trans medicine as a legitimate thing. It argued that trans people who wanted medical interventions would benefit from and deserved them, at a point when the default medical approach was “psychoanalyze them until they stop being trans.” Benjamin believed this was futile, and that for those patients for whom it was appropriate, interventions should be made available.
Blockchain is the technology that underlies bitcoins. The idea is that at each “stop” along a chain of users, a database associated with a particular coin (or component) updates to register the change of hands. The identity of each user could be cryptographically concealed, or it could be recorded transparently. Either way, the record of transactions is available to everyone along the chain, and it’s near-impossible to forge.
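A minimal sketch of that record-keeping idea, as a toy hash chain rather than the actual Bitcoin protocol: each entry commits to the hash of the previous one, so tampering with any past transaction breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build the chain: each block stores the hash of its predecessor.
chain = []
prev = "0" * 64
for tx in ["alice->bob:5", "bob->carol:2", "carol->dan:1"]:
    block = {"tx": tx, "prev": prev}
    prev = block_hash(block)
    chain.append(block)

def verify(chain):
    """Recompute every link; any tampering shows up as a mismatch."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))               # True
chain[0]["tx"] = "alice->bob:500"  # try to forge history
print(verify(chain))               # False: the forgery is detected
```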
Blockchain is security for the age of decentralization, and it could, in theory, make it possible for companies to verify the safety, composition, and provenance of manufactured goods. Supply Chain 24/7, an industry newsletter, calls blockchain a “game-changer” that “has the potential to transform the supply chain.” IoT is a different technology that addresses a similar problem. A company somewhere along a supply chain embeds a small transmitter, like an active RFID tag, in a component, allowing a monitor to see its location and status in real time. With sensors, a company could also keep track of the component’s environment, checking on things like temperature and humidity. It sounds like a solution custom-fitted for the problem at hand: with these tiny trackers, companies could finally get the visibility they say they’re after.
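What a monitor receives from such a tag is easy to imagine. A minimal sketch, with hypothetical field names, of a telemetry record and a spoilage check:

```python
from dataclasses import dataclass

# Hypothetical telemetry record; real deployments vary widely.
@dataclass
class TagPing:
    component_id: str
    lat: float
    lon: float
    temperature_c: float
    humidity_pct: float

def out_of_spec(ping, max_temp=30.0, max_humidity=60.0):
    """Flag shipments whose environment drifted outside tolerances."""
    return ping.temperature_c > max_temp or ping.humidity_pct > max_humidity

ping = TagPing("PCB-1042", 47.6, -122.3, 34.2, 41.0)
print(out_of_spec(ping))  # True: flag this shipment for inspection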
It would also miss the complex changes to the internet in recent years. For one, the design of the internet has changed significantly, and not always in ways that have supported the flourishing of the wisdom of the crowd. Anil Dash has eulogized “the web we lost,” condemning the industry for “abandon[ing] core values that used to be fundamental to the web world” in pursuit of outsized financial returns. David Weinberger has characterized this process as a “paving” of the web: the vanishing of the values of openness rooted in the architecture of the internet. This is simultaneously a matter of code and norms: both Weinberger and Dash are worried about the emergence of a new generation not steeped in the practices and values of the open web.
The wisdom of the crowd’s critics also ignore the rising sophistication of those who have an interest in undermining or manipulating online discussion. Whether Russia’s development of a sophisticated state apparatus of online manipulation or the organized trolling of alt-right campaigners, the past decade has seen ever more effective coordination in misdirecting the crowd. Indeed, we can see this change in the naivety of creating open polls to solicit the opinions of the internet or setting loose a bot to train itself based on conversations on Twitter. This wasn’t always the case—the online environment is now hostile in ways that inhibit certain means of creation and collaboration.
That creates a sense of urgency when it comes to raising awareness—particularly as gerrymandering becomes more effective with ever more granular data and ever more predictive algorithms. Our democracy depends on it.

Everyone knows what gentrification looks like. Community gathering places are replaced by boutiques and bank branches. Corner stores are made obsolete by VC-funded vending machine startups. Blocks that spent decades in a state of disrepair sprout Michelin-starred bistros and cocktail bars—some even trading on the allure of formerly stigmatized neighborhoods with Instagram-ready touches like fake bullet holes and 40-ounce bottles of rosé wine.
These new businesses don’t just reflect gentrification, of course. They actively drive it. Take New York City’s SoHo neighborhood: sociologist Sharon Zukin has described how art galleries and live-work spaces displaced industry from Lower Manhattan and paved the way for luxury residences for non-artists. Or Dumbo: in the late 1970s, real estate developer David Walentas came up with the idea of redeveloping a quiet corner of the Brooklyn waterfront while eating in the area’s lone gourmet restaurant.
But the additional exposure that the internet brings can also put sex workers at risk. Online platforms may provide workers with safety, convenience, and community, but they come with a danger: surveillance. In countries where sex work is criminalised, law enforcement will monitor a suspected worker’s online presence. In countries where only the client is criminalised, police will follow sex workers’ online movements in order to track down their law-breaking clients. In the United Kingdom, where sex work in itself is decriminalised, the authorities will use online surveillance to gather evidence against workers for the crime of brothel-keeping—which means simply that multiple workers work from the same building. “Rather than limit their patrol to the street,” writes Gira Grant, “vice cops search the Web for advertisements they believe offer sex for sale, contact the advertisers while posing as customers, arrange hotel meetings, and attempt to make an arrest.” In response, sex workers must go underground—even at a time when their industry has never been more public. Secret, invite-only groups on Facebook help keep conversations among sex workers away from prying eyes. There, workers exchange tips, discuss experiences, arrange meetups, and share information ranging from dodgy clients to which sex toy store offers discounts for industry professionals. On Twitter, there are glossy, client-facing accounts and anonymous, locked accounts—and a thriving support network in the DMs. Only word-of-mouth will lead you to these spaces—so unless you’re a sex worker, it’s more than likely you haven’t seen them.
An escort named Suzie tells me that she uses forums like SAAFE, but that she finds the underground social media groups much more “cohesive.” “There’s something more intimate about belonging to a network of workers who are local to me and part of a wider community too,” she explains. Molly reiterates this point. “These are community spaces,” she says. “There’s generally someone awake if you need to reach out at 3am, you know?” These spaces offer emotional support for workers dealing with a range of issues, from handling sexual violence to managing dating and relationships. They also offer friendship. Suzie says before she joined a Facebook support group, she had no sex worker friends. Now, they’re the “virtual cornerstones of my support networks.” But as with everything in the sex industry, even these secret social media communities aren’t immune to surveillance. Facebook’s algorithms have a nasty habit of flagging up all kinds of information about your activity to your wider network of “friends,” and are fond of suggesting that you add the most random of your phone contacts. “Many people use the networks via their real name accounts,” Suzie explains. She doesn’t. “I’ve seen people get into tricky situations and come close to being outed.” Still, for many workers, the risk is evidently worth it for the support these spaces provide. Sex work can be a lonely job.
In response, g0v hackers came up with a solution. The idea first originated in the g0v Slack channel: a digital map that would visualize the quantities of masks available in different pharmacies. Howard Wu, a programmer and member of g0v, noticed that many of his family and friends were sharing information in LINE groups about which convenience stores still had masks in stock, back when convenience stores were the primary places to buy masks. He built a real-time “Mask Map” which relied on crowdsourced data to display mask stock levels in different stores. Users’ geolocation data would help them find nearby stores. Since there weren’t any existing comprehensive GIS datasets of convenience stores in Taiwan, Wu used Google Maps to obtain this data. Wu’s site had roughly 550,000 visits within the first six hours.

But relying on crowdsourced data wasn’t accurate enough. Digital Minister Audrey Tang showed Wu’s work to Taiwan’s Prime Minister, who immediately understood its usefulness. The government recognized that it could improve the accuracy of such civic digital tools by providing more up-to-date data. On February 4th, two days after Wu released his digital map, the government announced the switch to selling masks from pharmacies. In a coordinated effort with Tang, the Ministry of Health and Welfare released mask inventory data at pharmacies nationwide that was updated every thirty seconds.

Wu created another version of his site with the new data—and received 830,000 hits on the first day. Soon after, using the API that Wu had built for his map, g0v hackers created dozens more digital tools to help track mask availability, from more maps to smartphone apps to LINE chatbots. A government website now lists over 130 digital products for tracking mask inventories in Taiwan, all built by civic technologists.
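The real endpoints and field names are documented by the government and g0v; the sketch below uses hypothetical ones purely to illustrate what those dozens of tools do: poll the inventory feed and list the best-stocked pharmacies near the user.

```python
import math
import requests

FEED_URL = "https://example.gov.tw/mask-feed.json"  # hypothetical endpoint

def km_between(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometers."""
    p = math.pi / 180
    a = (0.5 - math.cos((lat2 - lat1) * p) / 2
         + math.cos(lat1 * p) * math.cos(lat2 * p)
         * (1 - math.cos((lon2 - lon1) * p)) / 2)
    return 12742 * math.asin(math.sqrt(a))

def nearby_masks(lat, lon, radius_km=1.0):
    """Return nearby pharmacies, best-stocked first (hypothetical fields)."""
    pharmacies = requests.get(FEED_URL, timeout=10).json()
    nearby = [p for p in pharmacies
              if km_between(lat, lon, p["lat"], p["lon"]) <= radius_km]
    return sorted(nearby, key=lambda p: p["adult_masks"], reverse=True)
```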
The maps and apps have not only served as useful tools for people trying to purchase masks, however. The government has also relied on these tools to improve its own distribution supply chain. Officials have been able to track the fluctuating numbers in different cities and provinces, which they can use to adjust mask shipments in real time. This reciprocity between the government and the grassroots technologist community has greatly benefited both parties, and Taiwan as a whole. It also stands in sharp contrast to the top-down approach of mainland China, where technological interventions to contain the virus have taken a far more authoritarian form.

Care Works

There is no doubt that initiatives like the digital fence program and the g0v mask maps have contributed to Taiwan’s effective management of the pandemic. As mechanisms to help coordinate the allocation of people and resources, these digital tools have proven invaluable. But the more I read, observed, and talked to the people around me, the more persuaded I became that technology had more of a supporting role in Taiwan’s success.
The established disorder of our present era is not necessary. It exists. But it’s not necessary.

Playing Against Double Death

What might some of those practices for opening up new possibilities look like?

Through playful engagement with each other, we get a hint about what can still be and learn how to make it stronger. We see that in all occupations. Historically, the Greenham Common women were fabulous at this. [Eds.: The Greenham Common Women’s Peace Camp was a series of protests against nuclear weapons at a Royal Air Force base in England, beginning in 1981.] More recently, you saw it with the Dakota Access Pipeline occupation.

The degree to which people in these occupations play is a crucial part of how they generate a new political imagination, which in turn points to the kind of work that needs to be done. They open up the imagination of something that is not what [the ethnographer] Deborah Bird Rose calls “double death” — extermination, extraction, genocide.

Now, we are facing a world with all three of those things. We are facing the production of systemic homelessness. The way that flowers aren’t blooming at the right time, and so insects can’t feed their babies and can’t travel because the timing is all screwed up, is a kind of forced homelessness. It’s a kind of forced migration, in time and space.

This is also happening in the human world in spades. In regions like the Middle East and Central America, we are seeing forced displacement, some of which is climate migration. The drought in the Northern Triangle countries of Central America — Honduras, Guatemala, El Salvador — is driving people off their land.

So it’s not a humanist question. It’s a multi-kind and multi-species question.
In the Cyborg Manifesto, you use the ideas of “the homework economy” and the “integrated circuit” to explore the various ways that information technology was restructuring labor in the early 1980s to be more precarious, more global, and more feminized. Do climate change and the ecological catastrophes you’re describing change how you think about those forces?

Yes and no. The theories that I developed in that period emerged from a particular historical conjuncture. If I were mapping the integrated circuit today, it would have different parameters than the map that I made in the early 1980s. And surely the questions of immigration, exterminism, and extractivism would have to be deeply engaged. The problem of rebuilding place-based lives would have to get more attention.
And Facebook tracks everything you do.

They track everything, yes, but mostly they just funnel your usage towards Facebook. More than sixty countries have Free Basics now. In a country that has Free Basics, the entire digital media ecosystem is governed by Facebook, especially for poor people. In many cases, the entire digital media ecosystem is Facebook, or some combination of Facebook and WhatsApp, which Facebook owns.
It’s true in the Philippines, Myanmar, Kenya, and Cambodia. Those are four politically fraught places where we’ve seen tremendous success by ethnic nationalist and religious nationalist groups using Facebook either to support a particular candidate in a campaign or to instigate mass violence against the other side.
A few years ago, the mantra seemed to be, “Don’t read the comments, don’t read the retweets, just accept it for what it is.” But now it’s more like, “Okay, we should read the comments. We should treat that as a problem.” What do you think people should focus on with harassment online?

Think about being on systems that listen to you, or creating your own systems and spaces. Come to a place where, if you request something to be built, it’s in their purview to build it for you. What does a social media co-op look like? And how do we band together and force companies to listen? People started to react really poorly to the spread of fake news on Facebook, to a point where even internally, Facebook employees were organizing and protesting. How do we continue that, but beyond fake news? How do we say that it behooves these platforms to have a community technology team? How do we demand the things we want built for us? I don’t think change will come from legislation—I don’t think those companies would listen anyway. But there needs to be some equity exchange.
[1] The Hasso Plattner Institute of Design at Stanford, commonly known as the d.school, has helped popularize “design thinking,” particularly in tech.

Atossa Araxia Abrahamian (AAA): Luxembourg is very bullish on asteroid mining. They think it’s gonna be a huge business five, ten, fifteen, twenty years from now. And they’re a tiny country that needs to get ahead in this world. Historically, one of the ways they’ve done that has been to identify emerging businesses and lure them to Luxembourg. So that’s why they’re courting Planetary Resources, a startup based outside of Seattle that’s received funding from Richard Branson and various Google executives.
In fact, many of the technologies that are developed here are being developed with an eye to a global market. I’d go as far as to say predictive policing wasn’t even really for the United States, which has a high threshold for things like accountability and transparency. When predictive policing first came to American police departments, the marketing line from industry was that the departments were resource-scarce. Predictive policing, the story went, could help law enforcement agencies save money. That argument is absurd. American police departments are far from resource-scarce. But that argument wasn’t for us. That argument was for police departments that really are resource-scarce. It was a sales pitch for police departments in Karachi.
But it’s not just about global markets. It’s also about global contexts. American policing functions as a research site for military innovation—the “green to blue” pipeline is bidirectional. For instance, the National Institute of Justice’s 2009 predictive policing innovation grants (which funded Chicago’s now-deprecated Strategic Subjects List, or “heat list”) seeded the development of risk assessment technologies that served as templates for military detention algorithms in Iraq and Afghanistan, and that helped support counterinsurgency operations. Similarly, social media flagging systems designed for gang policing in urban contexts were studied by DARPA for monitoring ISIS recruitment. The racially hostile relationship that American police have with vulnerablized communities—what are commonly referred to as “low-information environments”—means that those communities can function as isomorphic innovation domains for US imperial contexts. So they test policing tech domestically, in places where the police have a hostile relationship with racialized communities, in order to design war tools for similar communities overseas.
After seeing the DARPA demo, I was unsettled by the idea of an emotionally-aware technology ecosystem constantly reporting back to companies or governments about our mental states, and then trying to intervene in them. But the thing I kept coming back to most often was the avatar of Ellie, sitting in her chair with her hands folded in her lap, calmly interviewing an actual human being with a potential mental illness. As a designer and writer of video games, I know that well-crafted interactive digital characters can elicit deep emotions from players, causing changes in their mood and outlook, just as powerful works in any medium can. Until I encountered Ellie, though, I hadn’t imagined what it would mean for people to share their most private thoughts and feelings with a machine. I wondered whether this artificial interaction could actually help people change, or even heal. So, in a spirit of curiosity, I set out to create a sort of Ellie of my own.
An Algorithm for Thoughts

When I began researching computerized therapy, virtual mental health care was already a booming category—and that was before the world was struck by the coronavirus. Following the outbreak of COVID-19, inexpensive, scalable virtual mental health tools may very well become a necessity. Social isolation, unemployment, pervasive uncertainty, death—the pandemic and society’s response to it have created a wave of emotional distress while at the same time stripping millions of people of their jobs, healthcare, and access to therapy. “With the coronavirus pandemic causing unprecedented levels of stress and grief, companies offering virtual mental health care say they’re seeing a massive surge in interest,” the medical news site STAT recently reported.
Did you start studying biology as an undergraduate?

I got a scholarship that allowed me to go to Colorado College. It was a really good liberal arts school. I was there from 1962 to 1966 and I triple majored in philosophy and literature and zoology, which I regarded as branches of the same subject. They never cleanly separated. Then I got a Fulbright to go to Paris. Then I went to Yale to study cell, molecular, and developmental biology.
Did you get into politics at Yale? Or were you already political when you arrived?

The politics came before that — probably from my Colorado College days, which were influenced by the civil rights movement. But it was at Yale that several things converged. I arrived in the fall of 1967, and a lot was happening.
We’ve always been particularly interested in how our work was being received by younger people. How do they engage with Logic? Do they find it useful? To help explore these questions, and to reflect on the story of Logic thus far, Ben Tarnoff and Xiaowei Wang from Logic sat down with Jasmine Sun, Jessica Dai, and Emily Liu from Reboot. Reboot is a community for young technologists that is active on many fronts, from hosting events to running an email newsletter to publishing a print magazine. Logic has tried to do its part to advance the ruthless criticism of all that exists; we turned to our friends at Reboot to help put ourselves under the microscope. If Logic did have to come up with KPIs, we could do worse than measuring our success by the amount of criticism we get from the next generation.
Ben Tarnoff (BT): As we transition to Logic’s next chapter, we wanted to create space in this issue to reflect on the project so far: what we’ve achieved, where we’ve failed, how and why we did what we did. And we thought of you all at Reboot as ideal conversation partners for that reflection, because our projects feel like such kindred spirits. We’ve tried to make similar interventions, I think. But with important differences. And one of those differences is generational: we’re a bit older, you’re a bit younger.
The first silence is related to the second. Women, after all, were seen as having largely failed in computing until recent historians’ attempts to correct that assumption. But as it turns out, technological failure and women’s erasure are intimately related in more than one way. When we put these facts together—our avoidance of failure, our ignoring of women in computing, and our tendency to see women’s contributions as less important—curious patterns start to emerge.
The failure of one unnamed and ignored postwar computer worker is a good place to start. In 1959, this particular computer programmer faced a very hectic year. She needed to program, operate, and test all of the computers in a major government computing center that was doing critical work. These computers didn’t just crunch numbers or automate low-level office work—they allowed the government to fulfill its duties to British citizens. Computers were beginning to control public utilities like electricity, and the massive and growing welfare state, which included the National Health Service, required complex, calculation-dense taxation systems. Though the welfare state was created by policy, it was technology that allowed it to function.
My research group has begun to practice some aspects of restorative justice in online communities in coordination with the moderators of those communities. Pre-conferencing, which involves one-on-one conversation between the mediator and different people involved in the harm, is often the first step of a restorative process. In order to get a deeper understanding of the types of harm that happen, the needs of those who are harmed, and what potential next steps could look like, we are currently conducting pre-conferencing interviews with people who have been harmed in online gaming communities, those who have been banned from certain games, and moderators.
In building a just future, however, we cannot rely solely on the intervention of platforms, or on restoring justice one harm at a time. Even as we work towards restoring justice right now, our long-term aim must be to transform the societies in which harm occurs. This is the work of transformative justice, which was popularized by women and trans people of color as a way to address interpersonal violence and tie it to structural and systemic forms of violence. As the organizer and educator Mariame Kaba puts it: “I am actively working towards abolition, which means that I am trying to create the necessary conditions to ensure the possibility of a world without prisons.” The future we should be working toward is one in which every single person has the skills to identify harm, hold themselves and others accountable, and work towards justice. At the same time, we must transform the social conditions, including patriarchy and racism, in which harm thrives. This kind of work leads us to fundamentally transform our relationships with one another, and it cannot be scaled or outsourced. When building a future that addresses online harm we should not seek mere alternatives to content moderation; we should work towards a world where no content moderation is needed.
The Red Deal

I want to change gears to talk about the Red Deal, The Red Nation’s proposal for climate justice and decolonization. What would you say are the main pillars?

Our program is influenced by the divest/reinvest strategies of Standing Rock and the Movement for Black Lives. At Standing Rock, Water Protectors called for divesting from fossil fuel industries. The Movement for Black Lives platform calls for divesting from carceral institutions and reinvesting in the things that people need to live — instead of the things that put us in jail.

The Red Deal focuses on the state itself as opposed to industry because it’s the state that keeps the extractive industries intact. Who else was at the pipeline protests? The police. What allows the criminalization of Native people? The carceral legal apparatus. What prevents colonized nations from throwing off the yoke of US dominance so they can develop? The US military. So demilitarization and carceral abolition are two main pillars of this program. We estimate that divesting from those state institutions would free up about a trillion dollars to reinvest in things like hospitals and healthcare and land that has been destroyed here, as well as in other countries that have been damaged by the US military.
We’re also using the idea of Alexandria Ocasio-Cortez and Ed Markey’s Green New Deal, which essentially argues in its legislative text that every social justice issue should become a climate justice issue. Indigenous people have long been the most confrontational arm of the environmental justice movement, but have received the least attention when it comes to actually making policy. The Red Deal says that if we’re going to imagine carbon-free economies and the end of fossil fuels, then we also have to talk about decolonization. How are we going to build wind turbines but not give the land back to Indigenous people? The Red Deal stands for a caretaking economy. If soldiers and the police are caretakers of violence, then we need to contrast those value systems with people who are caretakers of human and nonhuman life. That includes teachers, nurses, counselors, mental health experts. It also includes land defenders and Water Protectors.

We all need water and land and forests to live. But when you walk into a restaurant, who gets a discount? Military and police, who, by the way, tend to be men. That reflects a value system. Caretakers tend to be women, and caretakers of the land tend to be Indigenous. If we look at the anti-protest and anti-BDS laws [Eds.: the Boycott, Divestment, and Sanction movement is a Palestinian-led campaign “to end international support for Israel’s oppression of Palestinians”] that have gone through state governments, they criminalize caretakers. So that’s what we mean when we talk about investing in a caretaking economy that seeks to live in a correct relation with each other as human beings and nations, as well as a correct relation with the nonhuman world.
More insidiously, such narratives also serve to sanction the dominant technologies by presenting them as the only ones ever conceivable. They overlook the many possible alternatives that did not prevail, thereby producing the impression that the existing technologies are just the inevitable outcome of technical ingenuity and good sense.
If peripheral innovations like the Latin American experiments with informatics did not become mainstream, this is not because they were necessarily inferior to corporate, military, and metropolitan competitors. The reasons why some technologies live and others die are not strictly technical, but political. The Cuban model was arguably more technically sophisticated than its US counterparts. Yet some technologies are sponsored by the advertising industry, while others are constrained by a neocolonial trade embargo. Some are backed by the Pentagon, others crushed by the Vatican.
In 2018, a group of researchers at the Technical University of Dresden made a breakthrough to that end when they released DEDA: the Dot Extraction, Decoding, and Anonymization toolkit. Analyzing over a thousand printouts from over a hundred printers, they developed an algorithm to detect and decode the tracking dots of four grid patterns that were used across eighteen manufacturers. Finally, there was a way for regular people to pry the dots out of the hands of corporations and intelligence agencies, and commandeer them for ourselves. The DEDA toolkit allows anyone to anonymize documents by removing the tracking dots at the software level, actively inhibiting the process and giving non-governmental entities the ability to disrupt their printer’s surveillance mechanisms. It also enables users to hijack the MICs for their own purposes by creating user-defined secret patterns. Seemingly innocuous blank sheets of paper could now be used to convey information between any parties—not just central bankers, printer manufacturers, and intelligence agencies—so long as each party had access to the decoding key.

When I set up the toolkit on my laptop, I found that the anonymization and user-defined pattern tools worked right out of the box. It was thrilling to be able to manipulate the corporate tracking systems I had never consented to and create my own MIC. But when I attempted to decode the yellow dots on a piece of paper printed commercially from a Xerox PrimeLink C9065, DEDA came up empty. I tried another piece of paper from a smaller Xerox DocuColor printer and, again, DEDA found nothing, even after I had manually confirmed with a UV light that the page was marked with tracking codes.

Perhaps the toolkit is a victim of its own success. At the time, DEDA was a revelation. But security is not a steady state. The fanfare from the cybersecurity press that accompanied the toolkit’s release may have caught the eye of printer companies and intelligence agencies, and maybe they tweaked the encoding schemes they were using. It is also possible that the printers I tested use a MIC that hasn’t yet been catalogued by DEDA. According to the EFF, engineers employed by major manufacturers have hinted at the existence of a new generation of tracking mechanisms. It’s been posited by researchers that tiny discrepancies in the spacing between words or even the kerning of letters could be used to encode information. But little is known about these alternative tracking measures aside from vague warnings by industry insiders.
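For the classic yellow-dot MICs, at least, the first step that DEDA automates, surfacing the near-invisible dots, can be sketched in a few lines. The threshold values here are guesses, not DEDA’s actual parameters:

```python
import numpy as np
from PIL import Image

# Isolate faint yellow dots in an RGB scan ("scan.png" is a placeholder).
# Decoding the grid pattern is the hard part and is not attempted here.
scan = np.asarray(Image.open("scan.png").convert("RGB")).astype(int)
r, g, b = scan[..., 0], scan[..., 1], scan[..., 2]

# Yellow pixels: red and green high, blue noticeably lower than both.
mask = (r > 200) & (g > 180) & (b < r - 40) & (b < g - 40)
ys, xs = np.nonzero(mask)
print(f"{len(xs)} candidate dot pixels found")
```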
The rationale behind building surveillance mechanisms into laser printers laid the foundation for more far-reaching forms of surveillance that we encounter in our devices today. The modern internet is full of invisible tracking mechanisms that, like MICs, are marketed as beneficial at best and harmless at worst—as long as you have nothing to hide. Compared with the sprawling digital behemoth of the web, yellow tracking dots may seem trivial. But the efforts by intelligence agencies to keep tabs on printed documents are a grave, if obscure, threat to our privacy. And the history of those efforts reminds us that what might at first sound like a conspiracy theory is actually true: that in the name of preventing crime, government and industry collude in secret to track us all.
The second viewpoint is that the cost of Bitcoin “mining” supports Bitcoin’s price: as the “mining” cost increases, the price will rise too. This viewpoint is hard to sustain. At any given time, the supply of Bitcoin is determined by a prespecified algorithm and has nothing to do with how much computation power (measured by hashrate, or the number of hash operations per second) is engaged in “mining.” If the price of Bitcoin goes up, hashrate will be higher, but the supply of Bitcoin will not increase correspondingly, and the price of Bitcoin will not be held back. As more computation power competes for a given number of new Bitcoins, the cost of “mining” (measured by the number of hash operations required to produce a new Bitcoin) rises. Similarly, if the price of Bitcoin falls, the “mining” hashrate will be lower, but the supply of Bitcoin will not be reduced, and the price of Bitcoin will not be pushed up. Under this scenario, less computation power competes for a given number of new Bitcoins, which reduces the “mining” cost.
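A back-of-the-envelope sketch makes the structure of this argument plain. The numbers below are made up; what matters is that difficulty adjustment holds daily issuance roughly constant, so doubling the hashrate doubles the cost per coin without adding a single coin of supply:

```python
# Illustrative figures only; the block reward also halves periodically.
coins_per_day = 144 * 6.25    # ~144 blocks/day, fixed by difficulty retargeting
cost_per_hash = 1e-12         # assumed constant cost, in dollars

for hashrate in [1e20, 2e20]:  # hashes per day
    daily_cost = hashrate * cost_per_hash
    print(f"hashrate {hashrate:.0e}: supply {coins_per_day:.0f} coins/day, "
          f"cost per coin ${daily_cost / coins_per_day:,.0f}")
# Supply is unchanged in both cases; only the cost per coin doubles.
```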
II. Can Bitcoin futures stabilize Bitcoin price?

Bitcoin’s volatility is too high for it to be an effective medium of exchange, nor is it economically feasible to develop Bitcoin-denominated financial transactions. Figure 2 shows the ratio of Bitcoin’s volatility to that of the S&P 500 index since 2011; Bitcoin has been many times more volatile throughout. It is also very cheap to trade S&P 500 index funds. By these measures, S&P 500 index funds make a better medium of exchange than Bitcoin.

Price stability is a necessary condition for Bitcoin to become an effective medium of exchange. One proposal is Bitcoin futures. On December 10 and 18, 2017, CBOE Global Markets and CME Group respectively introduced Bitcoin futures. In addition to price discovery and risk management functions, Bitcoin futures facilitate the participation of institutional investors in the Bitcoin market, which was a key driver of the sharp rise in Bitcoin prices between October and mid-December 2017. In addition, it is straightforward to develop Bitcoin ETFs based on Bitcoin futures, which would allow retail investors to acquire exposure to Bitcoin through mainstream stock exchanges rather than cryptocurrency exchanges or wallets. The Bitcoin futures of CBOE and CME play a certain role in price discovery and risk management (figure 3), but Bitcoin’s volatility has not decreased significantly (figure 1 and figure 2). In fact, judging from the general situation in commodity and financial futures markets, futures trading does not necessarily lower the volatility of underlying assets. The transaction volume of Bitcoin futures is not large. This suggests that Bitcoin futures carry out only very limited risk-hedging functions, and that institutional investors’ interest in Bitcoin futures remains small.

III. Feasibility of stable tokens

Some practitioners are experimenting with stable tokens. There are two representative methods. The first category, represented by Tether, claims to issue a USDT token pegged 1:1 to USD and with a reserve rate of 100 percent. This amounts to a currency board regime. However, it is not clear whether Tether has sufficient reserves. If investors find that stable tokens such as Tether do not have sufficient reserves, a run on the currency peg will occur immediately. Indeed, Tether has become a source of systemic risk for the cryptocurrency market. The second category, represented by Basecoin, is still in development; it claims that it will mimic the open market operations of central banks. In order to stabilize Basecoin’s price in USD, it fine-tunes the Basecoin supply by issuing and repaying bonds denominated in Basecoin.
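As a toy sketch of the supply rule such “algorithmic central banking” proposes (illustrative only, not Basecoin’s actual protocol): expand supply when the token trades above the one-dollar peg, contract it when it trades below.

```python
# Toy algorithmic peg in the spirit of Basecoin, not the actual protocol.
def adjust_supply(supply, price, k=0.1):
    """Return a new supply given the token's market price in USD."""
    if price > 1.0:
        return supply * (1 + k * (price - 1.0))   # mint: push price down
    if price < 1.0:
        return supply * (1 - k * (1.0 - price))   # sell bonds: push price up
    return supply

print(adjust_supply(1_000_000, 1.05))  # above peg: supply expands
print(adjust_supply(1_000_000, 0.95))  # below peg: supply contracts
```

The fragility is visible even in the toy: contraction only works while someone is willing to buy the bonds.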
I’m always surprised by the cleverness of the people outside the company who try to steal things, and the stupidity of the people inside the company who feel they can get away with things.

In the first days after I found out I was pregnant, my number one pleasure was tracking my embryo’s growth on various apps. As nausea set in, my morning ritual consisted of crunching Cheerios in bed while clocking the latest developments: one week the apps told me my baby was the size of a poppy seed, the next week a pea. When only my inner circle knew that I was knocked up, these apps acted as chatty confidantes. Their girlfriendy tone assured me that everything was moving along fine.

If I scrolled far enough or swiped the wrong way, I would occasionally land on the “community” sections of these apps. A typical series of posts might look something like this:

Light spotting at 12 wks normal?
Nub & Skull Theory
BFP or BFN? Thoughts?
Constipation!
12 Weeks and Couldn’t Hear the Heartbeat — But all is OK!
**Update 7dpo test**
Who else just started their TWW… 3 DPO here
Veiny boobs LOL
Early miscarriage, cervix feels weird
Lord help me

Most of the time, I would click away. I was mystified by the acronyms and uncomfortable gawking at their naked vulnerability. On the one hand, I felt for these women, who clearly had nowhere else to take their fears, frustrations, and disappointment. On the other hand, I was trying to enjoy my BFP (Big Fat Positive) in peace. I found that these conversations were more likely to inspire new anxieties than to assuage the ones I already had.
I got my first exposure to such “communities” through pregnancy tracking apps like Glow. But when I did Google searches for things like “round ligament pain” or “morning sickness when will it end,” I discovered another place where people were discussing pregnancy online: website forums that look like relics of the AOL era. I couldn’t fathom why anyone would spend their time poring over the comments on these sites, let alone posting them. But millions of women do, posing (and answering) every conceivable question.

BabyCenter.com claims to be the world’s number one digital parenting resource, reaching 100 million people monthly and attracting eight out of every ten new and expectant moms each month in the US. Since its publication in 1984, the blockbuster pregnancy guide What to Expect When You’re Expecting has done brisk business enlightening and terrifying parents-to-be. Today, the online home of the pregnancy-guide-turned-empire, Whattoexpect.com, has a new community post every three seconds. These sites (and a handful of others) play host to groups as broad as “birth clubs” (comprised of women due the same month) and as narrow as “40+ Expecting 8th LO” (Little One).
This extractive logic is, crucially, inseparable from the need to discipline workers and consumers. In the Marikana massacre, the mining conglomerates sent the police to shoot the protesting miners in order to force them back to work. Corporate landlords and banks use subtler means, deploying credit-scoring technologies to encourage consumers to act the right way. This is an important point: such technologies don’t merely record behavior, they shape it. As one rental platform CEO explained to me:

People have adjusted themselves amazingly to… getting access to credit. The product itself has produced a better consumer. The banking system and the rules are producing the results that it wants: a consumer that is aware of its financial position, and that will take responsibility for what they have committed to.
This may remind you of the famous episode of the popular TV show Black Mirror in which people rank each other on an app after every social interaction. Lacie, the main character, desperately tries to increase her score so she can rent a property in Pelican Cove, the estate of her dreams. (She fails, and is eventually evicted.) Have credit-scoring technologies turned post-apartheid South Africa into a real-life Black Mirror? Not yet. But you may be surprised to learn that the episode was shot in Pinehurst, in the heart of Cape Town’s affluent northern suburbs. The fictional Lacie and countless real South Africans share a common fate: a simple score can prevent them from calling a place home. Previously enforced by public institutions, segregation is now big business, driven by big data.
But then why are members of certain groups considered riskier than others? This is where we need to talk about “cumulative disadvantage.” For example, some of these models make predictions on the basis of an individual’s level of education. Well, we know the education system is highly unequal. Therefore, there is cumulative disadvantage as a result of the kinds of differences in education that people have, because those differences are then used to discriminate against them.

And it’s not just education, of course. There are all sorts of factors that are subject to cumulative disadvantage. And these will continue to perpetuate discrimination unless there is a powerful actor that steps in and limits the use of certain factors in making predictions. Otherwise, the harms that are associated with cumulative disadvantage will just pile up.

How new is the practice of rational discrimination? I’m reminded of the redlining maps that government officials and banks developed in the 1930s to deny certain neighborhoods access to federally backed mortgages. These neighborhoods were predominantly Black and Latino, but the formal basis for excluding them was that they had a higher risk of default.
True enough. Look, I’m an old guy. I did statistics by hand. Statistics has been around for ages. The estimation of risk has been around for ages. And while discrimination on the basis of race may not have been based upon statistics at first, it soon was. But the nature of statistics has changed, and the nature of the technologies that use statistics has changed, in part through rapid developments in computation. That’s what we’ve got to pay attention to, especially if we want to gain control over these systems.
We sat down with Mai Ishikawa Sutton, lead organizer of DWeb Projects with the Internet Archive and cofounder and editor of COMPOST, an online zine about the digital commons, to discuss what the distributed web and DWeb are, community principles as an organizing tool, and the ways decentralization is a verb not a noun.
Could you tell us about your background and political and technical evolution?

I went to UC Santa Cruz for college and was part of a program now called the Everett Program. The program focuses on training undergraduates on practical technologies—like contact databases and website building, branding, social media, things like that—and pairs them with nonprofits who have concrete technical needs. It’s a student-led, student-taught program with the goal of helping students become what we call “social justice tech entrepreneurs.” As part of the program, I went to Malaysia and worked on the technology side of a Muslim feminist organization.

After graduation, I didn’t know what the hell to do with my life, so I dove into readings about internet policy and got involved with net neutrality activism. That brought me to the Electronic Frontier Foundation. My initial role there was to support all aspects of their international advocacy work around free expression, privacy, and intellectual property. I eventually chose to work on activism against international copyright laws. At the international policy level, copyright policies are largely decided through trade agreements—Hollywood and big publishers can essentially have their copyright wishlists enacted into the national laws of countries that sign on to these opaque trade deals. Copyright raises a lot of interesting issues and questions around creativity online: How can we make sure artists are paid for their labor, and how does that determine how people engage with culture online?

At some point, I wanted to figure out how to be on the other side of this equation. Instead of arguing against the endless terrible corporate policies of multinationals, I wanted to fight for positive initiatives. I knew there had to be an alternative, a whole other approach to economic policy. So I left EFF and went to an organization doing solidarity economy advocacy called Shareable. Their advocacy covered the commons as it relates to stewarding everything from land and water to waste and technology. My work there allowed me to explore this alternative economy: What is the commons? What is a cooperative? What is the essence of these things that people own, share, and steward together?
Finally, antitrust-style restrictions on firms might reduce problematic conflicts of interest. For example, we might limit practices of vertical integration: Amazon might be forbidden from being both a platform and a producer of goods and content sold on that platform, as a way of removing the incentive to self-deal. Indeed, in some cases we might take a conventional antitrust route, and break up big companies into smaller ones.
Civic Scale

Creating public options or imposing structural limits would necessarily reduce tech industry profits. That is by design: the purpose of such measures is to prevent practices that, while they may bring some public benefits, are both risky and too difficult to manage effectively through public or private oversight.
Underwriting this arrangement are inmates and their families. GTL bears the cost of installing the infrastructure for tablet deployment and then distributes its devices to inmates free (with exceptions: in Pennsylvania inmates pay $147 for theirs), charging them for access to content. Subscriptions typically start at 99 cents per day and go up to $25 for a month, says executive director of inmate applications and hardware Brian Peters. Other fees are transactional: for example, 25 cents for text-only email, 50 cents to send messages with an attachment, and $1 for one with embedded video. Securus didn’t reply to an interview request, but access to one of its tablets commands a monthly fee of $15 to $45, while another costs an undisclosed sum plus charges for downloadable content.
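Worked out with the figures above (illustrative only, since fee schedules vary by facility and contract), the costs add up quickly:

```python
# Rough arithmetic with the GTL fees quoted above; actual schedules vary.
subscription = 25.00        # one month of content access, at the top rate
daily_emails = 30 * 0.25    # one text-only email per day
weekly_videos = 4 * 1.00    # one embedded-video message per week

monthly_total = subscription + daily_emails + weekly_videos
print(f"${monthly_total:.2f} a month")  # $36.50, paid by inmates and families
```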
This model has the signal merit of getting taxpayers off the hook. The political untenability of asking us to pay for anything that appears to cosset inmates is considered axiomatic. “The public doesn’t want to pay,” says Martin Horn, who led the Pennsylvania and New York City Departments of Correction in the late 1990s and mid-2000s, respectively. For a glimpse of the injunctive hold of this supposed truism, look no further than the disclaimer the Federal Bureau of Prisons felt obliged to issue on the first web page that comes up when you Google “TRULINCS,” acronym for its Trust Fund Limited Inmate Computer System, an email service:

No taxpayer dollars are used for this service. Funding is provided entirely by the Inmate Trust Fund, which is maintained by profits from inmate purchases of commissary products, telephone services, and the fees inmates pay for using TRULINCS.
Yet Joan created the program, in the sense that she designed it and determined the logical flow of how it worked. It would not focus on matching people up through their similarities, but rather according to what they did not want. In other words, her program took strong negative feelings into account first when determining matches.
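A minimal sketch of that negative-first logic, with invented trait and dealbreaker fields, since the surviving accounts of her program do not specify them:

```python
# Hypothetical profiles: each person lists traits and hard dealbreakers.
people = {
    "A": {"traits": {"smoker"}, "dealbreakers": {"untidy"}},
    "B": {"traits": {"untidy"}, "dealbreakers": set()},
    "C": {"traits": {"tidy"}, "dealbreakers": {"smoker"}},
}

def compatible(x, y):
    """Screen on strong negative feelings first: no dealbreaker may match."""
    return not (people[x]["dealbreakers"] & people[y]["traits"] or
                people[y]["dealbreakers"] & people[x]["traits"])

pairs = [(x, y) for x in people for y in people if x < y and compatible(x, y)]
print(pairs)  # only the pairs that survive the negative screen
```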
It seemed to work. In fact, Joan’s first run at computer dating was so commercially successful that she immediately changed the name of her business from the Eros Friendship Bureau to the St. James Computer Dating Service. The name change trumpeted the importance of computing to her service at a time when this sort of futuristic take on romantic match-ups could still well have been a business risk.
When a Japanese site called Niconico invented the idea of writing comments directly on top of YouTube videos in 2006, it took less than a year for a clone of the platform to appear in China. In Japanese, the system was named 弾幕 (danmaku), or “bullet curtain,” after a subgenre of hardcore shoot-em-up games in which enemies fly in formation across the screen, like the famous arcade game Galaga on steroids. Both kinds of danmaku—the games and the comments—required their audience to process an overwhelming amount of visual stimulation at high speeds.
In China, several sites seeking to clone the Niconico experience copied the feature, as well as the Japanese characters for the name, which are pronounced “danmu” in Chinese. Today, the most successful of these clones by far is Bilibili, a social video site that has become an entertainment staple for young people in China.
With a platform like Facebook, we know a lot less. Until someone leaked the Facebook guide for moderators, we actually had no idea what was considered harassment or not. And this is the guide that says black children are not a protected class, but white men are. Until these materials are leaked, it’s really hard to know what the baseline is for what companies consider harassment. They often don’t even disclose why people are removed from a platform. People have to guess: “Oh, it’s probably because of this tweet.” I was just reading a story today in BuzzFeed by Katie Notopoulos about how her Twitter account was suspended for ten days for something she tweeted in 2011. It was a joke, with the words “kill all white people.” She wasn’t even notified, “Hey, it’s because of this tweet, and this is where you broke this rule.” I think that’s the problem here. There’s just no clarity for people.
That brings us to the subject of automated content moderation. Big platforms have a lot of users, and thus a lot of content to moderate. How much of the work of detecting harassment and abuse can be outsourced to algorithms?

At Wikipedia, content moderation is very human and very grassroots. It’s being performed by a community of editors. That seems unscalable by the standards of today’s Big Tech. But I think it’s actually the most scalable, because you’re letting the community do a lot of small things.
It’s difficult to say exactly why Five Star relied on old, self-maintained software. Perhaps it was simply a legacy system with high switching costs. But its proprietary nature also fit with the general culture of the people running the party. Though Five Star has presented itself as a populist movement empowered by digital tools, the reality is that Rousseau is controlled by a private firm, Casaleggio Associates, which operates with a great deal of secrecy and has exerted a significant amount of control over the movement.

A Sense of Community

The Five Star Movement was founded by Beppe Grillo, a comedian famous across Italy for his pugnacious humour, and an internet entrepreneur named Gianroberto Casaleggio—the founder of Casaleggio Associates. For more than a decade, Grillo has been the movement’s figurehead. But until his death from brain cancer in 2016, Casaleggio, who was relatively unknown among most Italians, was the movement’s most powerful figure.
Casaleggio had some intensely bizarre political beliefs that seemed to verge at once on the paranoid and the utopian. He was a fan of Genghis Khan’s horseback couriers and Benito Mussolini’s radio broadcasts, and foresaw a future a few generations hence in which a total war would annihilate billions of people, leaving the remnant to govern itself by means of a worldwide internet democracy.
These kinds of violent, spectacular disasters are what the public has come to understand as a technological failure. But most technological failures, especially when dealing with the environment, are decidedly mundane. They often disproportionately affect the poor in ways that are spatially diffuse and take generations to unfold—a kind of “slow violence,” as the scholar Rob Nixon has memorably argued. Because of these characteristics, these failures remain largely invisible to those in power and difficult for the majority to fully appreciate. This makes it possible for these technologies to look like successes—until the full extent of their failure is revealed in moments of catastrophe.
The story of Mexico City’s battle against flooding offers a telling lesson for us as we face the slow-motion disaster of climate change. The danger today is that we will again fall for the promise of technological fixes peddled by Silicon Valley entrepreneurs that seem to allow us to continue with business as usual. The problem with these solutions is precisely that they so often appear to work, at least for the groups whose voices count—for now.
Unfortunately, this change occurred right as the mainframe was on its way out, in a period when smaller and more decentralized systems were becoming the norm. This meant that by the time ICL delivered the product line they had been tasked with creating, in the mid-1970s, the British government no longer wanted it, and neither did any other potential customers. As the government realized their mistake—though not the underlying sexism that had caused it—they quickly withdrew their promised support for ICL, leaving the company in the lurch and finishing off what was left of the British computer industry.
Hiding Tech’s Mistakes

Stephanie Shirley’s company succeeded by taking advantage of the sexism intentionally built into the field of computing to exclude talented and capable technical women. At the same time, the rest of the British labor market discarded the most important workers of the emerging computer age, damaging the progress of every industry that used computers, the modernization projects of the public sector, and, most strikingly, the computer industry itself.
Furthermore, Brandeis, like many classical liberals, saw the marketplace itself as a machine for enforcing checks and balances. Once cut down to size, firms would face the checks and balances imposed by market competition—and through competition, firms would be driven to serve the public good, to innovate, and to operate efficiently. By making markets more competitive, Brandeisian regulation would thus reduce the risk of arbitrary power.
Around the same time, the architects of modern corporate law were trying to solve the same problem from another angle. In 1932, Adolf Berle and Gardiner Means published The Modern Corporation and Private Property, a landmark work that laid the foundation for modern corporate governance. For many modern defenders of free markets, Berle and Means are seen as architects of the idea that shareholders and other market actors could hold corporate power accountable through the “discipline” of financial markets. This would defuse the problem of corporate power and assure the economically efficient allocation of capital.
Into this stepped Press for Change (PfC), a long-running trans campaigning group in the UK that has fought for, among other things, increased access to medical treatment. Robust data showing positive outcomes for patients would seem to challenge both of the NHS’s excuses for not resourcing it: not only would this data provide certainty on the question of improved patient outcomes from surgery, but it would demonstrate the urgency and importance of providing funding in the first place. So in the mid-1990s, PfC sought to gather just such data. They launched a survey of patient experiences, both online and on paper. In other words, they used data to gather and surface marginalized people’s experiences and desires in order to challenge public policy—a pretty canonical example of what we might now call data activism.
So what happened? When I sent that question to Claire Eastwood (who organized the survey) through PfC co-founder Christine Burns, the answer came back: nothing. The data was collected, but it was never analyzed. Why? In a word, “capacity.” PfC was, in Claire’s words, “massively overloaded as the [overall] campaign expanded, and … sadly this task got overtaken by other urgent and vital work, and never made it back up the priority list.” As Burns put it, the problem wasn’t an absence of data—they had an “embarrassment of data”—but rather that PfC was “always on the edge of biting off more than we could chew,” and the survey pushed them over that edge. In theory, the data could have been used to make a stronger argument for the efficacy of surgical care. But in practice, making that argument through an analysis of the data was more than PfC had the resources to do.
Erin: Different states have different policies about this. A few years ago, California passed a law that’s supposed to protect tenants from having their data collected by these screening companies. New York just last year passed a law that bans tenant blacklists. But apparently it’s still happening in both California and New York despite these laws. More broadly, there’s not a lot of regulation or enforcement around how these companies get data. In New York, LexisNexis sends its own people to housing court to take photos of the computers with all the eviction records. Then they bring those back, standardize them, and sell the data to tenant screening companies. Really weird things like that! The surveillance you mentioned is also part of the property technology, or proptech, umbrella, which involves biometric data and all kinds of other tools and practices. We have been having a lot of conversations with other organizers about how to ensure that we don’t put data out there that can be used against tenants.
That gets to a security issue: there are housing justice groups already using EvictorBook, but it’s not publicly available yet. Can you talk about how you’re thinking about whether or not to make it public? Azad: We’ve been thinking through worst-case scenarios: how could this data be used in a way that we didn’t plan for? What if the data was hacked? The eviction data is something we have access to that most people do not. We’ve gotten it through relationships with the courts, housing boards, and tenant organizers. So the raw data itself will probably remain restricted. But the rest of the site will be public and usable.
Despite the proliferation of slick apps dedicated to fertility and pregnancy, much of the conversation still happens on old-school, web 1.0 forums like BabyCenter and What to Expect. Hectic, disorganized, and largely anonymous, these message boards recall the chatrooms of the 1990s, where you could be anyone and say anything under the cover of a screen name. While plenty of the pregnancy action has made the move to social media — where both “mommy” groups and branded pages abound — the forums maintain the upper hand in one key way. Whereas on Facebook users have to join the group in order to see the conversations, traditional message boards are far more lurkable. “Many studies show that there are way more lurkers than posters,” says Anna Wexler, a bioethics fellow at the University of Pennsylvania studying the rise of do-it-yourself medicine. “If you look at the number of posts on these forums, the number is tremendous — on some of the What to Expect ‘birth clubs,’ it’s over 500,000 posts already.”

I stopped finding the forums so pitiable as soon as I got my first bad test result. When I was around ten weeks pregnant, I went in for a routine test called a nuchal translucency. Analog and old-school, the ultrasound scan measured the fluid behind the fetus’s neck to look for Down syndrome, which previous tests had indicated the baby did not have. It didn’t even occur to me to be nervous about this test, so I didn’t pay attention to the Very Bad Sign of the ultrasound tech quietly turning the monitor away from me.

I waltzed into my midwife’s office, ready to learn about breathing techniques and placenta smoothies. Instead she gently explained that the baby seemed to have excess fluid behind her neck — we learned it was a her — and that it could mean a few different things, none of them good. I was so shocked and distraught that I lost the ability to hear and process information. As soon as I got home, I descended into a Google vortex, which led to countless conflicting perspectives across multiple message boards.
Down the Rabbit Hole

Once I was referred to a Maternal-Fetal Medicine practice to run more tests, I quickly understood the primary reason women take to the boards: getting a hold of a doctor is too annoying and takes too long. “If you have a question and you don’t want to wait until your doctor’s appointment and don’t want to make the call, the forums are just really immediate,” explains Wexler. Most ob-gyn offices also employ a firewall of nurses to answer more basic questions and refer the complicated ones to doctors. “I can’t directly contact my fertility doctor,” explains Aba Nduon, a psychiatrist who moderates a pregnancy subreddit. “She has a nursing team that I can email or call with questions and then wait until they’re able to get in touch with her, or you can post on a forum and get a much quicker answer.” Intended or not, the effect is to shame patients into being very selective about what issues they bring up to their medical team.

It turns out another very good reason to hit the forums is to decode the complicated answers (or opaque non-answers) that busy, impatient medical professionals actually do give you. For every casual aside (“you’ll probably be fine”) or stern command (“stay on bedrest until the bleeding stops”), there’s an online army of amateur experts ready to explain, debunk, reassure, or raise the red flag. As Wexler points out, “Even if you have the most amazing OB in the world, just hearing from your OB that something is normal is not the same as hearing from twenty people who are going through the exact same thing as you that it’s normal and you’re okay and you’ll get through this.”

While the forums may seem like a potential hotbed of misinformation, the volume of voices tends to serve as a check on bad advice. The more science-minded threads offer an unexpectedly effective form of crowdsourcing, providing both quantity — multiple users chiming in with their own experiences — and quality — laypeople who are so deeply immersed in the finer points of fertility treatments or fetal development that they can competently address even the most obscure questions. Nduon told me that she’s also part of a Facebook group for female physicians going through infertility. Compared to the laypeople equivalent, she said it’s “honestly not too dissimilar — we have a lot of the same questions because a lot of this stuff is very specialized information.”
The far greater violation, by my lights, was his assumption that I would want to have sex with this person, and his acting on that assumption—he said this was the condition under which they went home together. Moreover, that I should want to have sex with this person because they were, as he put it, “queer and cool.” The arguments that followed didn’t focus on the problematics of him assuming my desires for me. They turned on the fulcrum of why my desires weren’t somehow better. I wasn’t attracted to this person, so my ex called me a body fascist. My ex might be right. My libidinal tastes fit firmly within conventional determinations of beauty. I could, and often do, look back on this story as an ur-example of a manarchist (as they are known) weaponizing the idea of radical sexual politics in order to police the desires of others to serve his own.
And that’s all true. No one should be expected to fuck anyone. But this is complicated by the fact that sometimes our desires are worth questioning and challenging. Sometimes experimentation, while it should be conditional on consent, does require trying things we might not immediately desire in and of themselves, but as potential introductions to desiring differently. Don’t know what I want but I know how to get it.
No one says any of this explicitly, of course, which is why the problem of women in technology is thornier than shoehorning women onto all-male panels. The developers I spoke to told me about much more subtle, very likely unconscious incidents of being steered toward one specialization or another. Two different women told me about accomplished female acquaintances being encouraged to take quality assurance jobs, currently one of the least prestigious tech gigs. Ehmke told me about a friend who applied for a back-end developer position. Over the course of the interview, the job somehow morphed into a full-stack job—for which Ehmke’s friend was ultimately rejected, because she didn’t have the requisite front-end skills.
And everyone can rattle off a list of traits that supposedly makes women better front-end coders: they’re better at working with people, they’re more aesthetically inclined, they care about looks, they’re good at multitasking. None of these attributes, of course, biologically inhere in women, but it’s hard to dispute this logic when it’s reinforced throughout the workplace.
This isn’t to suggest that every piece of data needs to be public. As the Indigenous scholar Stephanie Carroll Rainie and her colleagues have explained, open data can be in tension with the rights of Indigenous peoples to govern their own data. In particular, it may cause tensions for communities that continue to experience data extraction under settler-colonial frameworks, and who are working to establish their own frameworks. When it comes to knowing net zero, the important thing is that the computational systems for monitoring emissions and carbon flows are publicly or community-owned, accessible to community members, and democratically governed. What this actually looks like on local, regional, and planetary scales needs to be established through participatory processes that will require time and trust.
Across the Binaries

How do we get there? The answer is not fancy. It involves assembling a coalition for public ecological data infrastructures, drawn from several existing communities. There’s the long-standing open source software movement that could participate. There’s also the open data movement in science, and the movement for Indigenous data sovereignty. Environmentalist NGOs are tracking data about emissions, from larger projects like the Environmental Defense Fund’s methane-tracking satellites to databases like the Carbon Disclosure Project. There are a number of people across these fields who might join a movement for open and public planetary data. So why doesn’t this movement exist in a more mainstream way, or even as a common demand at climate protests?

One challenge is the fact that this is an anticipatory mobilization. We’re not mobilizing against something that’s already happened; we’re acting defensively based upon trends that are just beginning to emerge. Proprietary carbon management platforms are still in their infancy; their full impact might not be felt for many years.

Then there’s the cultural divide between the worlds of climate activism and tech activism. Organizations concerned with climate change may be inclined to see the digital as outside their core mission; after all, many people became involved in this space because they loved being outside, not because they wanted to think about algorithms. Moreover, such people may be actively opposed to technological interventions, or understandably dismiss net zero as a narrow, technocratic goal. Meanwhile, people with expertise in algorithmic justice issues might not be tracking developments in the environmental sphere. Their attention is likely devoted to a multitude of other concerns.
This turn to technological solutions for training caregivers in the face of an inadequate healthcare system is nothing new. At least since the early 1960s, when the country faced a shortage of trained nurses, computer-based education has been touted as an efficient and cost-effective way to patch holes in the nation’s disastrous healthcare infrastructure. Then, as now, the rhetoric of urgency has been paired with the logic of cost savings to make online learning and computer simulations seem indispensable.
But computerized medical education has inevitably represented complex patients through grossly simplified models. Because you can’t fit the diversity of human health experience into a software program, this education has always been oriented around notions of so-called “normal” or “typical” patients. In reality, these “typical” patients turn out to be composites of the sorts of people who hold power in society, particularly well-off white men. As a result, computerized medical education has helped to perpetuate the structural racism and sexism that has long pervaded the medical establishment, as well as our wider society.

Working under the promise that a computer could “dispense information just as effectively, sometimes moreso, than a human instructor,” students in Illinois in the 1960s began the very first experiment in computerized medical education, learning nursing fundamentals on one of the world’s earliest computer networks. Looking back to those students and their computer-based courses demonstrates what is often overlooked, and even dangerous, with techno-care, and why that matters more than ever in our algorithmic age.
There’s a contradiction here, the most fundamental contradiction in capitalism: wealth is made collectively, but owned privately. We make data together, and make it meaningful together, but its value is captured by the companies that own it, and the investors who own those companies. We find ourselves in the position of a colonized country, our resources extracted to fill faraway pockets. Wealth that belongs to the many—wealth that could help feed, educate, house, and heal people—is used to enrich the few.
The solution is to take up the template of resource nationalism, and nationalize our data reserves. This isn’t as abstract as it sounds. It would begin with the recognition, enshrined in law, that all of the data extracted within a country is the common property of everyone who lives in that country.
For many people who are teaching themselves to code, front-end work is the lowest-hanging fruit. You can “view source” on almost any web page to see how it’s made, and any number of novices have taught themselves web-styling basics by customizing WordPress themes. If you’re curious, motivated, and have access to a computer, you can, eventually, get the hang of building and styling a web page.
Which is not to say it’s easy, particularly at the professional level. A front-end developer has to hold thousands of page elements in her mind at once. Styles overwrite each other constantly, and what works on one page may be disastrous on another page connected to the same stylesheet. Front-end development is taxing, complex work, and increasingly it involves full-fledged scripting languages like JavaScript.
In exchange for assuming greater risk, venture firms expect higher returns. Their model tolerates losses—sometimes obscene ones—for a chance at grabbing an entire market or customer “mindshare” first. Since some of the companies a VC fund invests in will fail, the ones that succeed must succeed big time. The venture model requires large, disproportionate returns from a handful of investments. About 6 percent of investments generate about 60 percent of venture capital’s total returns, according to a data set covering thirty years of returns from Horsley Bridge Partners, a firm that has invested in many well-known venture funds. Indeed, the larger the fund size or amount of capital under management, the larger the expectation of a big “exit”—an IPO or acquisition by another company.
A fund that has $100 million may only be able to justify investing in companies that seem likely to result in an exit that pays out $1 billion or more. And since most VC funds are structured as ten-year partnerships, they’re not only looking for big exits—they’re looking for big exits on schedule.

The VC model isn’t right for everyone. Many companies will never match the “return profile”—the rate of growth—required by venture capital. Moreover, for a startup, a venture check is only the beginning. Even though venture capital rounds are often celebrated as major milestones, they are really just entry tickets. When a company accepts venture funding, it commits itself to steep expectations for future growth: growing 2X a year is considered good, 3X is great, and 4X or more is amazing. These are challenging targets to meet, and the expectation to grow quickly can scale—but also contort—a founder’s original vision.
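To make the fund arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every parameter is an illustrative assumption, not a figure from any actual fund:

```python
# Back-of-the-envelope venture fund math. Every number here is an
# illustrative assumption, not data from a real fund.

FUND_SIZE = 100_000_000       # a $100M fund
TARGET_MULTIPLE = 3           # assume LPs expect ~3x back over the fund's life
OWNERSHIP_AT_EXIT = 0.15      # assumed stake the fund still holds at exit

# Total proceeds the fund needs across its whole portfolio.
required_proceeds = FUND_SIZE * TARGET_MULTIPLE   # $300M

# If, per the power-law pattern described above, one breakout company
# must return most of the fund, how big does its exit need to be?
required_exit = required_proceeds / OWNERSHIP_AT_EXIT
print(f"Breakout exit needed: ${required_exit:,.0f}")   # $2,000,000,000

# Growth expectations compound quickly: 3x revenue growth per year
# sustained for four years is an 81x increase.
growth_per_year = 3
years = 4
print(f"Implied revenue multiple after {years} years: {growth_per_year ** years}x")
```

The point of the sketch is the compounding: modest-sounding targets like “3X a year” translate into billion-dollar exit requirements very quickly.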
Diane has been in Green Bank since 2007, but her own assimilation hasn’t been entirely smooth. Besides her personality, there are cultural differences and class differences. She’s a religious Christian who says grace before every meal, while many of her neighbors worship bluegrass music and hard work. She comes from a middle-class family and isn’t afraid to assert her needs, while this town’s longtime residents are predominantly working poor with patterns of grit and self-sufficiency.
To try to bridge this gap, Diane has taken to recommending that prospective newcomers make a trial visit, with a three-pronged homework assignment while they’re here: Buy a copy of the local newspaper, The Pocahontas Times. Go to the local grocery/hardware/gas station and buy something, anything. Take a tour of the Green Bank Telescope.
And how does the system that you built fit into the appeals process?

It tracks all the elements of the appeal. If a veteran was in an accident, he could’ve injured his knee, his elbows, his head—so there could be different issues on appeal. He could be rated 30 percent for an elbow injury, 10 percent for a knee injury, 30 percent for the head injury. Typically, each appeal has an average of three issues. VACOLS keeps track of all the issues he’s appealing, keeps track of all the forms he has to submit during the process.

Like for the first step, he’s going to file a notice of disagreement to indicate that he disagreed with the VA’s original decision. And then there’s four or five other forms to fill out during the process, depending on how far it progresses. And all those forms are kept in the database.

Then, if he requests a hearing, it keeps track of that request and the dates, as well as any mail or evidence we get in correspondence that they sent to the VA. That gets loaded into VACOLS and tracked.

The appeal will start out at the regional office (RO) and it’ll go to the decision review officer, then it might go to the appellant’s rep at the Veterans of Foreign Wars, American Legion, whoever his rep is. The same information will go to an attorney and a judge, so there might be ten or fifteen people touching the appeal at any given point. VACOLS keeps track.

What was the process before they decided to build a technology system to handle it?

Paper. Up until about five years ago, all the claims files with all their service records and military records and everything were in these big twenty-four-inch-thick folders.

For the first twenty-five years of VACOLS, one of the main things it tracked was exactly where these big claims folders were—who physically had them in their possession. Was it with a judge, was it with an attorney, was it out in the regional office, was it with a veteran service organization? We used barcode technology to track all the claims folders. We had thousands of these moving through the building.
So that was the main purpose of VACOLS to begin with: let’s track where these folders are and who has them and the outcome of the appeal. Since then, we’ve added on module after module to schedule hearings, hold virtual hearings, and do all sorts of other things that are part of the appeals process.

Originally, the system kept track of a physical folder. Now, it’s keeping track of virtually who has the claim and what the status is and what part of the process it’s in.
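As a rough illustration of the record-keeping described here, consider a minimal sketch in Python. The structures and field names are invented for the example; they are not VACOLS’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

# A toy model of the tracking described above: one appeal, several
# issues, and a custody trail of who has touched it. All names are
# invented for illustration, not taken from VACOLS itself.

@dataclass
class Issue:
    description: str          # e.g. "knee injury"
    rating_percent: int       # e.g. 10 or 30

@dataclass
class Appeal:
    veteran_id: str
    issues: list[Issue] = field(default_factory=list)
    forms_received: list[tuple[str, date]] = field(default_factory=list)
    custody_log: list[tuple[str, date]] = field(default_factory=list)

    def receive_form(self, form_name: str, when: date) -> None:
        self.forms_received.append((form_name, when))

    def transfer_to(self, holder: str, when: date) -> None:
        # Originally this tracked a physical folder via barcode scans;
        # now it tracks who holds the claim virtually.
        self.custody_log.append((holder, when))

appeal = Appeal("V-12345", issues=[Issue("elbow injury", 30),
                                   Issue("knee injury", 10),
                                   Issue("head injury", 30)])
appeal.receive_form("Notice of Disagreement", date(2016, 3, 1))
appeal.transfer_to("Decision Review Officer, RO", date(2016, 4, 15))
```

The design mirrors what the interviewee describes: the appeal is the unit of work, and the system’s oldest and most essential feature is simply an append-only log of custody.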
The first crop of 350 students arrived at Fort Rodman in January 1965. They hailed “from the big cities and the small ones, the shut-down mining towns and the farm country” in New York, Texas, Alabama, and thirty-one other states, according to a 1966 promotional film about the program. Some of these young men may have been lured to Fort Rodman by postcards that featured an aerial photograph of the base on a sunny day, looking almost like a beach resort, and on the other side the text: “A HANDUP—NOT A HAND OUT.” One student said that the program was his last resort; a judge told him it was either Fort Rodman, or else.
Students at Fort Rodman were separated into small cohorts, with one instructor assigned to five or six students. The instructors were white college graduates, some from the Peace Corps, who had been trained on site to be “tutor-counselors” to the young men who for more than a year would make Fort Rodman their home. The tutor-counselors were expected to be mentors and to bond closely with the boys; they ate with the students, hung out in barrack-style dormitories where as many as fifty slept in bunk beds with military-cornered sheets, and played football together. This was as much a part of the students’ training as their remedial math and language courses and their regimen of office skills training, which included how to use typewriters, calculators, and keypunch and data-processing machines.

In the 1966 promotional film for Fort Rodman, students seem impressed as an IBM employee shows them around a new punch-card tabulating machine. “How much time would this machine save compared to how you do ’em by hand?” a Black student in a shirt and blazer asks in the film. “Take a payroll application, for example,” the IBM instructor answers. “A payroll that might take an entire week to prepare could be done on this machine in, say, two to three hours at the most.”

But if the electronic magic showcased by IBM captivated the boys at Fort Rodman, it wasn’t enough to help them develop proficiency in the skills needed to get a technical job at a company like IBM. Most of what we know about the program comes from promotional materials that reflect how the people running the program, and IBM leadership, idealistically imagined it working. Even in these sources, however, the causes of Fort Rodman’s failures are clear. Some were operational: for example, despite being designed to provide small-group, individualized attention, the program hired too few staff to meet student needs. Reports noted that students were often neglected by their instructors. Some students stopped showing up for class.
A legacy magazine run by an old man with inherited wealth, Harper’s, reveals that it plans to identify the creator of the Google Doc in an upcoming cover story by Katie Roiphe, a pundit who established her career in the 1990s by dismissing the existence of date rape in the pages of the New York Times. Dayna Tortorici, the young female editor who took over n+1, a literary magazine founded by four to five men, tweets that the “legacy magazine” responsible for the story should not out the Shitty Media Men list creator.
Nicole Cliffe, a writer and editor best known as the founder of the (now defunct) feminist website The Toast, retweets Tortorici, and offers to compensate any writer who pulls a piece from Harper’s. The campaign gathers momentum on Twitter. The creator of the spreadsheet, Moira Donegan, preempts Harper’s by publishing a long essay outing herself in New York magazine’s digital native “fashion and beauty” (women’s) vertical, The Cut.
In 2014, I moved into the Silent Barn, a three-story DIY space in Bushwick, Brooklyn. There were art galleries, a recording studio, a barber shop, murals on every surface, and shows every night. On one of my first nights living there, I came home to a kitchen filled with dozens of collective members, sitting on chairs and the floor, holding a meeting. It reminded me of an Occupy general assembly, but with the purpose of running an all-ages music and art venue. For four years, I stayed involved as a resident, collective member, and co-facilitator of programming. These days, however, DIY spaces like the Silent Barn are under threat. The Silent Barn shut its doors in May 2018, about one year after the shuttering of another local, long-running DIY venue, Shea Stadium. Beloved Brooklyn spaces like Death by Audio and Palisades have also shuttered in recent years. The closing of DIY spaces is nothing new. Their reliance on volunteer labor makes them vulnerable to collective burnout, while their shoestring finances make them vulnerable to bankruptcy. Moreover, one of their defining characteristics is often their semi-legal status, which exposes them to pressure from the authorities. For these reasons, the announcement of another closure never comes as too much of a surprise. Still, it seems that DIY venues in big cities face more challenges than in previous generations: namely the rising price of urban real estate, and the complex relationship that exists between art spaces and gentrification.
But there’s another factor fueling the crisis of DIY: Facebook. To be involved in a local music community today means maintaining an inextricable reliance on Facebook events, and Facebook-owned Instagram, for promotion. Further, some DIY spaces have become dependent on Facebook groups for everything from connecting with the public to hosting internal organizing conversations. Across the music world, digital platforms are reshaping the ways that community forms around music—and, in the case of Facebook, exacerbating the significant obstacles that DIY spaces already face. A platform that claims to bring people closer together is helping derail and destroy some of the few remaining places that actually do.
The tech industry typically speaks the language of engineering and finance. Increasingly, however, its leaders are coming to realize that they may need to become conversant in other idioms as well: the social and the political. And the social and the political, as our writers make clear, are the territories where tech’s particular configurations of bigness and smallness must ultimately be understood.
Social media is the site of much hatred and delusion. It is also the soil where social movements can take root. Hashtags enable individual incidents of injustice to go viral, revealing systemic patterns. Yet that virality is made possible by monopoly—it requires one Facebook, one YouTube, one Twitter. And these large concentrations of unaccountable private power endanger the basic premise of democracy—the idea, often invoked but rarely attempted, that the whole of the people should determine how society is run.
Therefore, Setién Quesada and his colleague argued, publication counts did not conclusively determine the “productivity” of authors, any more than declining citation counts indicated the “obsolescence” of publications. Cuban libraries shouldn’t rely on these metrics to make such consequential decisions as choosing which materials to discard. Traditional informatics was incompatible with revolutionary librarianship because, by treating historically contingent regularities as immutable laws, it tended to perpetuate existing social inequalities.
Cuban information scientists didn’t just critique the limitations of traditional informatics, however. They also advanced a more critical approach to mathematical modeling, one that emphasized the social complexity and the historical contingency of informational regularities. In the 1980s, when Cuban libraries were beginning to adopt digital computers, Setién Quesada was tasked with developing a mathematical model of library activity, based on statistical data, for the purpose of economic planning. But he was dissatisfied with existing models of the “intensity” and “effectiveness” of library activity, devised by Soviet and US information scientists. (In the discussion below, I include mathematical explanations inside parentheses for interested readers, following Setién Quesada’s own terminology and notation.)

Soviet information scientists computed the “coefficient of intensity” of library activity by multiplying the “index of circulation” (the number of borrowings m divided by the number of potential readers N) by the “index of rotation” (the number of borrowings m divided by the total volume of holdings f): in symbols, (m/N) × (m/f). Meanwhile, US information scientists computed the “measure of effectiveness” of libraries, combining the index of circulation with an “index of capture” (the number of actual library readers n divided by the number of potential readers N). In contrast to these two approaches, Setién Quesada proposed an alternative “Cuban model,” which evaluated what he called the “behavior of Cuban public libraries.”

Setién Quesada argued that “the Cuban model is more complete.” It included many more variables, all of which he considered important. For instance, the Cuban model included an “index of communication” (based on the number l of readers who use the archive), while the Soviet and US models “do not express the precise level of the author-reader social communication that happens in libraries.” Moreover, those other models “do not consider the role of the librarian in the development of the activity.” For Setién Quesada, the librarians, “together with the readers, constitute the main active agents involved in the development of this activity.” Hence in the Cuban model, every variable was adjusted relative to the number of librarians (incorporated into the adjusted variables denoted by a vinculum). Finally, the other models “do not offer an index that synthesizes the comparative behavior of places and periods.” By contrast, the Cuban model sought to facilitate comparisons of different libraries and time periods (each represented by the subscript i).
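For interested readers, the recoverable indices translate directly into code. Here is a minimal sketch of the building blocks defined above, using Setién Quesada’s notation; since his full Cuban formula isn’t reproduced in the text, it is deliberately left out rather than guessed at:

```python
# The building-block indices defined above. Variable names follow
# Setién Quesada's notation: m borrowings, N potential readers,
# n actual readers, f total volume of holdings.

def index_of_circulation(m: float, N: float) -> float:
    return m / N

def index_of_rotation(m: float, f: float) -> float:
    return m / f

def index_of_capture(n: float, N: float) -> float:
    return n / N

def soviet_coefficient_of_intensity(m: float, N: float, f: float) -> float:
    # "Multiplying the index of circulation by the index of rotation."
    return index_of_circulation(m, N) * index_of_rotation(m, f)

# The US "measure of effectiveness" combines circulation with capture,
# and the Cuban model further adjusts every variable by the number of
# librarians and adds indices like author-reader communication. The
# text doesn't specify those exact combinations, so they are left as
# building blocks here rather than invented.
```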
How Are You Going to Pay for It?

Nature, Raymond Williams once said, is the most complex word in the English language. But I’ve come to think that “natural” mostly means “freely given.” Nature offers the “free services” on which human life depends. More generally, nature describes what we take for granted, what we expect to happen of its own accord. From “natural birth” to “natural beauty,” nature hides a lot of work done behind the scenes. As the scholar Merve Emre reminds us, “all reproduction, even reproduction that appears ‘natural,’ is assisted.” Emre is concerned with human reproduction, but the point holds just as true for the reproduction of nature itself.

We can no longer take the reproduction of our world for granted, or assume that the work of nature will take place automatically. Reproducing life on Earth will require a great deal more assistance from us, in our simultaneously extraordinary and limited capacities as a single species on a planet of millions. It will also require a great deal more recognition of the assistance provided by all those other species. What the feminist theorist Sophie Lewis calls “full surrogacy” — a call to distribute labor more broadly, to cultivate reciprocal practices of kinship and care — is as applicable to our nonhuman relationships as to our human ones.

While we may be able to perform some work on nature’s behalf in order to stabilize our biosphere, however, the expense will be enormous. Indeed, the biggest barrier to developing substitutes for certain ecological services may turn out to be cost.

The ecologist John Avise observed that the true lesson of Biosphere 2 was an economic one. In the late twentieth century, economists had tried to estimate the value of Earth’s freely provided services, but had usually stumbled over the technical difficulties of doing so. Biosphere 2 made it possible to construct “a more explicit ledger,” Avise wrote. All told, it had cost over $150 million to keep eight humans alive for two years. As Avise pointed out, “if we were being charged, the total invoice for all Earthospherians would come to an astronomical three quintillion dollars for the current generation alone!” Replacing human labor with machines usually saves money. Replacing the work of nature with machines or human labor is the opposite: it makes what was free expensive.
This means that substitution is rarely economical. In China’s Hanyuan County, for example, where pesticides have wiped out many bee colonies, human workers have subbed in, using feather dusters to pollinate pear trees by hand. But human pollination is only viable in Hanyuan because it’s cheaper than renting beehives. In a system (capitalism) that aims to keep costs down above all else, the cost of human labor has to be approaching zero for it to compete with nature’s gifts.  So as we ask who, or what, will do the work of nature, we should also ask another question: Who will pay for it? Earthly survival will require new ways of organizing not only our social and technological relationships, but our economic ones. As Biosphere 2 demonstrates, filling in for the work of nature is unlikely to be a profitable enterprise. Capitalism is unlikely to pay the extra costs. The question of what can replace it may be the biggest substitution problem of all.
Could you talk about the connection between genetic determinism and disease likelihood? One of the things that you mentioned in your papers is “just-so” evolutionary explanations. If you get a high likelihood of a disease on 23andMe, are you just doomed forever? What is a just-so evolutionary explanation?

Are you familiar with polygenic risk scores? They’re super interesting. That’s what’s hot right now in our field. They are these algorithms or heuristics that we can use to predict the potential for what we call the pathogenicity of a mutation. We use them to predict whether a mutation is going to be cancerous, or cause heart disease or something like that.

That predictive quality of genomics is something that a lot of people see as the Holy Grail that we’ve been searching for: the difference between predictive medicine and reactive medicine, and that’s definitely where we want to go, especially with common, complex disease. But when you train every single algorithm on white people’s data, you bias everything, and so none of these polygenic risk scores work in populations that are not white.

Then there’s a second part of it. And this gets to the kind of “just-so” world of it. If you say a mutation is cancerous or pathogenic on a correlative basis, and you don’t have causative data to show that, that’s problematic, right? You have no proof except the P value, you have no proof except some statistical phenomenon or correlation or racist-ass narrative. And so that has just played out through the field of population genetics forever. There are just all these levels of inaccuracy that get baked into these things and they become real.

One of the most famous examples, and one that I was happy to shoot down, is the thrifty gene theory. The thrifty gene theory states that Polynesian and Pacific Islander people today have really high rates of obesity and type 2 diabetes because of our history as voyaging people: if you had these mutations that predispose you to hypercaloric storage on these journeys, it was an advantage. But then once modernity hits, you develop type 2 diabetes and obesity, and it’s because of our genes.
It’s just so racist because it’s like—no, maybe the reason why we have high rates of obesity is because you took away our access to reefs, to fishing, hunting rights and all these other things, and then you replaced our highly nutritious diet of poi and fish with spam, white rice, and soy sauce. Why are you blaming this on innateness and our evolutionary history? You’re also discrediting our accomplishments as probably the greatest seafaring people in human history.
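For readers unfamiliar with the mechanics behind the polygenic risk scores discussed above: at its core, such a score is a weighted sum over genetic variants. Here is a minimal sketch in Python, with invented variant IDs, weights, and genotypes:

```python
# A polygenic risk score in its simplest form: for each variant, count
# how many copies of the risk allele a person carries (0, 1, or 2) and
# weight that count by an effect size estimated in a reference study.
# The variants and weights below are invented for illustration.

effect_sizes = {           # variant id -> estimated effect (log odds)
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}

genotype = {               # variant id -> risk-allele count for one person
    "rs0000001": 2,
    "rs0000002": 0,
    "rs0000003": 1,
}

score = sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)
print(f"Polygenic risk score: {score:.2f}")

# The catch described in the interview: if effect_sizes were estimated
# almost entirely from people of European ancestry, the score's
# predictive value degrades badly for everyone else.
```

The simplicity is the point: the statistical machinery is just a weighted sum, so everything depends on where the weights came from.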
If you're working in Seattle for Amazon and you're good at your job and you want to leave your job tomorrow, you have far fewer opportunities. Where are you going to go, Microsoft? There's not nearly as much mobility. So I think a big part of the reason we have less organizing is that people are more afraid to jeopardize their jobs. If you want to stay in the Northwest, you keep your head down.
Statistically it is also more likely that an Amazon employee will have a family than a Google employee. So that’s another factor that makes people more risk-averse. Why should they do something that would potentially jeopardize their job? Particularly when it has a low chance of success?

As you pointed out, one of the reasons that the organizing efforts within Amazon have received so much media attention is because the media is fascinated by Amazon. There has been a spate of stories looking critically at Amazon’s market power, partnerships with law enforcement, labor conditions in its warehouses, and so on. Amazon also has prominent critics in national politics like Bernie Sanders. How are these kinds of criticisms perceived from the inside? How do people respond to that sort of thing?

I think your question kind of misses the forest for the trees. For most people at Amazon, glancing at the Apple News feed on their iPhone is about as much of the discourse as they consume. They don’t care about the news. It doesn’t contribute anything to their life. There are colleagues I’m friends with who don’t really know who ran for president. They figure it’s all going to be the same anyway, so why bother.
Taking venture funding can also involve surrendering a certain amount of control, although the way that venture firms exert influence varies widely. At an early-stage firm like the one where I work, we engage in so-called “soft advising.” By contrast, venture firms that invest at a later stage usually hold seats on the company’s board, and can therefore exercise “hard control.” Sometimes this control is especially stark. In an earlier era, as many as 50 percent of original founders were thrown out in favor of professional management, according to an interview with Sequoia Capital founder Don Valentine. For the past decade, however, there has been a trend toward letting founders keep more power. The Facebook IPO set a new precedent by enabling Mark Zuckerberg to own most of the voting shares. Afterwards, venture firms marketed themselves as founder-friendly so as not to miss out on deals. But more recently, in the wake of the Uber crisis around Travis Kalanick, there is now some discussion that the industry has overcorrected toward too much founder control.
The Masters of the Masters of the Universe

To a startup founder seeking financing, venture capitalists might look like masters of the universe. But they answer to higher masters in the form of “limited partners.” These are the masters of the masters of the universe—venture capital’s customers, who supply most of the capital for a firm’s different funds.
Rather, big data describes a particular way of acquiring and organizing information that is increasingly indispensable to the economy as a whole. When you think about big data, you shouldn’t just think about Google and Facebook; you should think about manufacturing and retail and logistics and healthcare. You should think about pretty much everything.
Understanding big data, then, is crucial for understanding what capitalism currently is and what it is becoming—and how we might transform it.

What Makes Data Big?

As long as capitalism has existed, data has helped it grow. The boss watches how workers work, and rearranges them to be more efficient—this is a good example of how surveillance generates information that’s used to improve productivity. In the early twentieth century, Frederick Winslow Taylor made systematic surveillance of the productive process a key part of “scientific management,” a set of widely influential ideas about how to increase industrial efficiency.
Yet despite the critics, the belief in the wisdom of the crowd framed the design of an entire generation of social platforms. Digg and Reddit—both powered by a system of upvotes and downvotes for sharing links—surfaced the best new things on the web. Amazon ratings helped consumers sort through a long inventory of products to find the best one. Wikis proliferated as a means of coordination and collaboration for a whole range of different tasks. Anonymous represented an occasionally scary but generative model for distributed political participation. Twitter—founded in 2006—was celebrated as a democratizing force for protest and government accountability.
Intelligence Failure

The platforms inspired by the “wisdom of the crowd” represented an experiment. They tested the hypothesis that large groups of people can self-organize to produce knowledge effectively and ultimately arrive at positive outcomes. In recent years, however, a number of underlying assumptions in this framework have been challenged, as these platforms have increasingly produced outcomes quite opposite to what their designers had in mind. With the benefit of hindsight, we can start to diagnose why. In particular, there have been four major “divergences” between how the vision of the wisdom of the crowd optimistically predicted people would act online and how they actually behaved.
But the rise and fall of gripe sites is an important chapter in the history of the internet. Gripe sites were far more than a place to complain. Rather, they offered a lively, anonymous outlet for consumers and workers to criticize corporate power, and even to organize against it. As a result, they faced an onslaught of attacks from companies. Gripe sites were the flashpoint for intense legal and regulatory battles—battles that boiled down to a confrontation between two conflicting visions of the internet’s purpose. Who owns the internet, and what is it for? This was the core question in the war over gripe sites, and one that remains no less urgent today.
Talking Shit

In 2004 there were around 7,800 .com websites that included company names and derisive slang verbs like “sucks” or “blows.” They were so popular in the early 2000s that Forbes even published several annual “Top Corporate Hate Websites” articles, ranking gripe sites on criteria like “ease of use, frequency of updates, number of posts, hostility level, relevance, and entertainment value.”

Where did all these sites come from? Gripe site founders were often motivated by specific grievances. The creator of WalMart-Blows.com said he created it because he was “pissed off at Wal-Mart… for their crappy customer service and for treating their employees like s–t.” Bradley Jones ran a Radio Shack for seven years and launched his gripe site, RadioShackSucks.com, after suing the company, which had refused to renew his contract and opened up a competing franchise down the street from his location.
No Bosses, No Knife Missiles

I want to switch gears and ask you about labor organizing and union busting at NPM. I found a GitHub gist that you published in 2015, titled “A Feminist Hacker's Guide to Labor Organizing,” a year or so before Trump’s election sparked the current wave of white-collar tech worker organizing. In 2015, it was a national press story that Googlers were anonymously sharing their salary data in a spreadsheet, and #TechWontBuildIt was still years away. Do you remember making that gist?

Wow, yeah, I do remember that. That was after a conversation I was having with someone in a user group. I wrote it down, sent it to the list for the group, and haven't looked at it in years. I wonder if there’s anything I would disagree with now.

It's pretty solid! Can you talk about why you were thinking about tech worker organizing in 2015?

Before people were talking about Trump and how technology would be used under his administration, I was concerned about power differentials within tech: the ways that tech companies drive developers to burn out; the fact that Silicon Valley companies saw us in Portland as a source of cheaper labor; the fact that you can't make your boss stop being racist, but you can create consequences for your boss being racist.

I wrote that gist because we were talking in the user group about all the things that hadn't helped us. The employee resource groups that companies created to try to make us feel heard, HR, all these trainings that were like, "Here's your bias," and then everyone says, "Yup," because bias does not go away when you learn about the existence of the bias.

There was this real frustration about how things weren't changing. We had been in the industry long enough to have seen some efforts that went nowhere, and not because we didn't try very hard. We became educated about a lot of things, and then we told other people, and then they still didn't do what we needed them to do. I just got to this place where I didn’t want to keep nicely asking: “Please stop being racist, please stop being sexist.” I wanted to do something that we hadn't tried yet.
Labor organizing is a solution. It helps us leverage what we have, which is people—multiples of people—against these institutions. But it requires that workers are informed and talking to each other. In many ways, it’s like organizing an open source project: we have people, we have a need, how do we share what we know with each other, how do we build something together?

How did that start at NPM?

My organizing at NPM happened by accident. Right before I joined, the company brought in this CEO who was the stereotypical guy that you hire so that the company can finally make money for the investors. But the company had attracted people whose ideals were at odds with just cashing out at any cost. There had also been unaddressed burnout issues before the CEO joined, and he made them worse because he was so terrible to the people who had put in so much work to get NPM to that point. Yet we were encouraged to talk about it all: about burnout and retention and the company's new focus.
Logistics and Translations

Each of these three labor forces—the internal team, the crowdworkers, and the flaggers—is an answer to the problem of moderation at scale. Each provides a way for either a few people to do a great deal with limited resources, or for a lot of people to each do a little, together.
The challenge of content moderation, then, is as much about the coordination of work as it is about making judgments. What the press or disgruntled users might see as mistaken or hypocritical might be the result of slippage between these divisions of labor—between what is allowed and what is flagged; between how a policy is set and how it is conveyed to a fluctuating population of crowdworkers; between how a violation is understood in one cultural climate and how it is understood in another; between what does trigger a complaint and what should.
So in order to be ethical, in order to be moral, in order to be decent, in order to be kind, in order to have a society that’s functional, in order to even tell if your technology is working well or not, you have to grant a specialness to that thing we call a person. And that’s what I mean by humanism.
Virtual Reality as a Medium

GW: I love that. Now, I want to turn towards how these ideas relate to your work on virtual reality. As you know, I’ve never experienced virtual reality myself. But you’ve told me how much you’ve learned about human sensory perception from working on virtual reality. And I wonder whether an important part of the goal for you is not just to build better technology, but rather to learn more about what it means to be human and how we can embrace it more fully.
EJ expresses a variation on a familiar techno-utopian theme: networked digital technology will destroy the gap between bodies separated by geography, nationality, gender, class, and age. But this new proximity would come on very specific terms: the future will have arrived once the female can reach out to manipulate the male for his sexual pleasure, not for her own.
Although the company promoted the device using scientistic jargon that suggested a devotion to achieving some absolute standard of fidelity to the haptic real of sex, the RealTouch Interactive produced instead a particular, heterosexual male fantasy of female sexuality—one where the female body was reduced to particular configurations of effects on the male sex organ. AEBN’s promise to recreate the real through a computer simulation was betrayed by the hard-coded materiality of the interface—by the machine’s inability to send sensations back from the male to the female he was supposedly having sex with. The simulation, bound up with patriarchal ideologies of sex work, only stretched so far: a girl in Romania could reach out and touch the male, but the technology ensured that she couldn’t feel what she touched.
That’s supposed to help correct gendered pay discrepancies. But it doesn’t really work, because there are all sorts of escape hatches built in. Salary bands only cover your salary. There’s lots of other ways that people get paid. As we discussed, talent acquisition is one of them. Talent acquisition gives companies a way to pay a premium to people who have more social capital. But that’s not the only way that people are rewarded unequally. There’s also the sign-on bonus. The sign-on bonus in Silicon Valley today can easily be a hundred thousand dollars. Even for somebody coming off their first job, or maybe even right out of school, it can be upwards of fifty thousand dollars. And the recruiters have a lot of leeway in setting that number. Then there’s your annual bonus, which is a percentage of your salary at most companies. Finally, there’s your stock-based compensation.
When you take an offer at a company, you’re given either stock options or grants of shares in the company. Those options or grants vest over a four-year schedule. And there’s really no restriction on how high that can go. So for a lot of people, a majority of total compensation comes from stock. Salary typically tops out at around $200,000 or $250,000 at a big company. But it wouldn’t be surprising to be given another $100,000 in stock grants. If you’re joining a company early on, that stock, by the time you’re done vesting it, could be more like a million dollars.
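Putting the pieces together: here is a minimal sketch of how these components add up over a four-year vesting schedule. Every number is an illustrative assumption drawn loosely from the ranges mentioned above, not data from any actual offer:

```python
# Rough total-compensation math for a hypothetical senior offer.
# All figures are assumptions within the ranges discussed above.

salary = 220_000                 # within the stated $200k-$250k band
sign_on_bonus = 100_000          # paid once, in year one
annual_bonus_rate = 0.15         # assumed percentage of salary
stock_grant_total = 400_000      # assumed grant vesting over four years
vesting_years = 4

for year in range(1, vesting_years + 1):
    total = (
        salary
        + salary * annual_bonus_rate
        + stock_grant_total / vesting_years
        + (sign_on_bonus if year == 1 else 0)
    )
    print(f"Year {year}: ${total:,.0f}")

# Note the asymmetry: salary sits inside a published band, but the
# sign-on bonus, bonus rate, and stock grant are all negotiable --
# which is exactly where unequal pay creeps back in.
```

The sketch makes the structural point visible: the components outside the salary band can easily exceed the salary itself.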
Yet somehow, people have always found plenty to say. And much of what they’ve said, using one technology or another, has been dirty. Indeed, as soon as humans build new tools for transmitting words, sounds, and images, they start using those tools to get each other off. From erotic daguerreotypes to Skinemax, “blue films” to phone sex, successive generations have shown extraordinary resourcefulness in unlocking the sexual potential of each new technology.
The internet, however, marked a significant advance. It’s hard to imagine a more accommodating medium for human sexuality. Not only is it infinite in its form—its packets can carry anything that can be encoded as information, from text to video to VR—but it’s limitless in its content, since that content can so easily be created and circulated by users. This latter aspect has always been a defining feature of the internet, ever since its earliest incarnation as a military research network called ARPANET. In contrast to something like television, where a few people produce the content and the rest of us consume it, the internet is both produced and consumed by its users. It is, in a very real sense, a group effort.
I’m thinking of Alyssa Battistoni’s piece for us on Biosphere 2, or Miriam Posner’s piece for us on supply chain software. Those pieces have a normative and critical dimension, but they’re mostly trying to describe how a system works.

AB: There’s always an argument you’re making implicitly along the way, just by virtue of the facts that you marshal and the way that you organize them. All of those choices are motivated. And you want to be in charge, as a writer and an editor, of the effects those choices have on the reader. Even a piece that, on the surface, may seem purely descriptive can make a very serious argument about the way the world is ordered. Every piece is an opinion piece to a certain extent.
JF: As editors, how do you motivate writers to make that journey? I sometimes feel like you have to play the role of coach, cheerleader, and psychiatrist all at the same time. I know that you spent a lot of time having conversations with people even before they had something to pitch—just to hear about what they were working on, and plant the seeds for future pieces.
The real-life Anne shared many talents and characteristics with the fictional one. She, too, was young, white, and technically skilled. The fictional Anne enjoys an exciting career in a growing field, as she proves her knack for technical work, laboring with quiet diligence each day on a secret, high-tech airplane project. She eventually shows up her male superiors by figuring out a critical engineering error that was holding up the project. While they flounder, trying desperately to figure out the flaw in their design, she hesitantly points it out, afraid of embarrassing them. In the process, she not only saves the project but wins the heart of the male coworker she had her eye on—and allows him to take credit for her breakthrough.
Neither the real Anne nor the imaginary one was making an unusual decision, as a woman, in putting her career behind her other life goals. In fact, each was making the socially expected, and strongly encouraged, choice. In the book, Anne stays in the workforce after marriage, but only after being admonished that she must put her husband’s career first. Even in fantasies, women weren’t allowed to think that their careers could come first—the best Anne might do was try to juggle her work, her family, and her husband’s needs, knowing that if a ball needed to drop, it would be her career.
Another way to use the analytic suite is to paint a detailed picture of the population of interest in an area. One officer explained: The big thing that Palantir offers is a mapping system. So, you could draw out a section of [his division] and say, “Okay, give me the parolees that live in this area that are known for stealing cars” or whatever [is] your problem… It’s going to map out that information for you … give you their employment data, what their conditions are, who they’re staying with, photos of their tattoos, and, of course, their mugshot. [And it will show] if that report has [a] sex offender or has a violent crime offender or has a gang offender. Some are in GPS, so they have the ankle bracelet, and … we have a separate GPS tracker for that.
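The shape of that query is worth pausing on. What follows is a purely hypothetical sketch of the kind of area-plus-attribute filter the officer describes; it is not Palantir's actual interface, and every record field here is invented for illustration.

```python
# Illustrative only: person records filtered by division and offense type.
# Field names ("division", "on_parole", etc.) are assumptions, not
# Palantir's real schema.

records = [
    {"name": "A", "division": "77th", "offenses": {"auto theft"},
     "on_parole": True, "gps_monitored": False},
    {"name": "B", "division": "77th", "offenses": {"burglary"},
     "on_parole": False, "gps_monitored": True},
]

def query(records, division, offense):
    """Return parolees in a division with a matching offense on record."""
    return [r for r in records
            if r["division"] == division
            and r["on_parole"]
            and offense in r["offenses"]]

print(query(records, "77th", "auto theft"))  # matches record "A" only
```

The point of the sketch is how cheap the operation is: once records carry these attributes, drawing a box on a map and pulling everyone who fits a profile is a one-line filter.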
A Palantir software engineer spoke of the gang unit monitoring entire networks of people: “Huge, huge network. They’re going to maintain this whole entire network and all the information about it within Palantir.” Palantir, one sergeant explained, is also an “operational game changer”: it gives him the data he needs to protect his officers’ safety by, for instance, locking down a neighborhood and positioning an airship overhead while law enforcement conducts a search. Of course, this situational awareness made possible by Palantir can ratchet up officers’ sense of danger and escalate an already tense situation. Such platforms provide an unprecedented number of data points supporting the “danger imperative,” or the cultural frame officers are socialized into, which encourages them to believe that they may face lethal violence at a moment’s notice.
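One simple way to picture the "whole entire network" the engineer describes is an adjacency structure: each person maps to the people they are linked with, along with a note on where the link came from. This is an assumption about the general technique, not Palantir's implementation.

```python
# Hypothetical sketch of maintaining a social network as an adjacency map.
# The link sources ("field interview card", "co-arrest record") are
# invented examples of how associations get recorded.

from collections import defaultdict

network = defaultdict(dict)

def link(a: str, b: str, source: str) -> None:
    """Record an undirected association between two people."""
    network[a][b] = source
    network[b][a] = source

link("A", "B", "field interview card")
link("B", "C", "co-arrest record")

print(dict(network["B"]))
# {'A': 'field interview card', 'C': 'co-arrest record'}
```

Once the structure exists, adding a person to it takes one edge—which is how entire networks of people, most of them never charged with anything, end up under maintenance.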
For the next several weeks, I deliberately avoided opening my feedback summaries. I stocked my vehicle with water bottles, breakfast bars, and miscellaneous mini candies to inspire riders to smash that fifth star. I developed a borderline-obsessive vacuuming habit and upped my car-wash game from twice a week to every other day. I experimented with different air fresheners and radio stations. I drove, and I drove, and I drove.
Aggressively Managing Freedom

The language of choice, freedom, and autonomy saturates discussions of ride hailing. “On-demand companies are pointing the way to a more promising future, where people have more freedom to choose when and where they work,” Travis Kalanick, the founder and former CEO of Uber, wrote in October 2015. “Put simply,” he continued, “the future of work is about independence and flexibility.”

In a certain sense, Kalanick is right. Unlike employees in a spatially fixed worksite (the factory, the office, the distribution center), ride-hailing drivers are technically free to choose when they work, where they work, and for how long. They are liberated from the constraining rhythms of 9-to-5 employment or shift work. But that apparent freedom poses a unique challenge to the platforms’ need to provide reliable, “on-demand” service to their riders—and so a driver’s freedom has to be aggressively, if subtly, managed.

One of the main ways these companies have sought to do this is through gamification. Gamification is “the Silicon Valley buzzword du jour,” the tech researcher PJ Rey recently observed. Simply defined, it is the use of game elements—point scoring, levels, competition with others, measurable evidence of accomplishment, ratings, and rules of play—in non-game contexts. Games deliver an instantaneous, visceral experience of success and reward, and they are increasingly used in the workplace to promote emotional engagement with the work process, to increase workers’ psychological investment in completing otherwise uninspiring tasks, and to influence, or “nudge,” workers’ behavior. This is what my weekly feedback summary, my starred ratings, and the other gamified features of the Lyft app did. Gamification also serves the useful function of redirecting conflict away from capital, as workers become consumed with the more urgent task of beating the game.
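To see how little machinery the nudge requires, here is a minimal, purely hypothetical sketch of a gamified weekly summary of the sort described above. The goal threshold and the messages are invented; this is not Lyft's actual code.

```python
# Hypothetical sketch of a gamified feedback loop: roll individual ride
# ratings into a weekly score and attach a motivational nudge. The 4.8
# goal is an assumed value, not any platform's real threshold.

def weekly_summary(ratings: list[int], goal: float = 4.8) -> str:
    avg = sum(ratings) / len(ratings)
    if avg >= goal:
        badge = "5-star streak! Keep it up."
    else:
        badge = f"You're {goal - avg:.2f} stars from your goal. Time for a car wash?"
    return f"Average rating: {avg:.2f}. {badge}"

print(weekly_summary([5, 5, 4, 5, 5]))
# Average rating: 4.80. 5-star streak! Keep it up.
```

A single conditional is enough to turn a week of labor into a score to chase—which is precisely the redirection of attention the passage describes.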
1/ As we close this issue, COVID-19 case numbers are surging across the European Union, and if they are not yet as high in North America, it seems to be mostly for a lack of tests. Oil prices are plunging, the Dow Jones is plunging, and Ted Cruz is in voluntary self-quarantine. New York State prisoners are making hand sanitizer for $0.65 per hour. Passengers are disembarking from the Diamond Princess into the Port of Oakland.
For months, prominent figures in the tech industry have been warning that it will get worse before it gets better. In February, Recode reported that the venture capital firm Andreessen Horowitz was already on high alert, canceling employee travel to China and banning handshakes in the office. In early March, Sequoia Capital wrote a memo warning partners that coronavirus would be the “black swan” of 2020; they urged startups to prepare for leaner times.
