prompt | completion |
---|---|
In a broader sense, hating COBOL was—and is—part of a struggle between consolidating and protecting computer programmers’ professional prestige on the one hand, and making programming less opaque and more accessible on the other. There’s an old joke among programmers: “If it was hard to write, it should be hard to read.” In other words, if your code is easy to understand, maybe you and your skills aren’t all that unique or valuable. If management thinks the tools you use and the code you write could be easily learned by anyone, you are eminently replaceable. The fear of this existential threat to computing expertise has become so ingrained in the field that many people don’t even see the preference for complex languages for what it is: an attempt to protect one’s status by favoring tools that gate-keep rather than those that assist newcomers. As one contemporary programmer, who works mainly in C++ and Java at IBM, told me, “Every new programming language that comes out that makes things simpler in some way is usually made fun of by some contingent of existing programmers as making programming too easy—or they say it’s not a ‘real language.’” Because Java, for example, included automatic memory management, it was seen as a less robust language, and the people who programmed in it were sometimes considered inferior programmers. “It's been going on forever,” said this programmer, who has been working in the field for close to thirty years. “It's about gatekeeping, and keeping one’s prestige and importance in the face of technological advancements that make it easier to be replaced by new people with easier to use tools.” Gatekeeping is not only done by people and institutions; it’s written into programming languages themselves. | In a field that has elevated boy geniuses and rockstar coders, obscure hacks and complex black-boxed algorithms, it’s perhaps no wonder that a committee-designed language meant to be easier to learn and use—and which was created by a team that included multiple women in positions of authority—would be held in low esteem. But modern computing has started to become undone, and to undo other parts of our societies, through the field’s high opinion of itself, and through the way that it concentrates power into the hands of programmers who mistake social, political, and economic problems for technical ones, often with disastrous results. |
This vision was partially fulfilled: by the mid-1980s, socialist Bulgaria was producing up to 47 percent of all computer hardware within the Eastern Bloc, from Berlin to Vladivostok. But the country still suffered from negligible growth rates and low worker productivity, in part due to the party’s inability to implement the technology its factories were producing. State-produced hardware, computers, and numerically-controlled machines often languished unused due to a shortage of necessary software. As this problem played out over the course of the 1980s, the party vested its hopes in a mass education effort to transform what turned out to be the last children of socialism into the first electronic generation. These kids would be trained to create the software that would allow the party to automate all it dreamed of automating—from chemical production to managing pension databases. Beginning in 1983, Bulgarian children as young as twelve were enrolled in a state-run program of technical tinkering. High schools and universities were transformed into laboratories of the future and factories for the regime. While learning BASIC, Bulgarian children were to advance themselves as intellectual laborers and truly creative citizens of a newly scientific socialist world, in order to become the future governors of much more complex production and social processes. | In reality, however, the 1980s generation of Bulgarian children found themselves becoming cogs in an economy that continued to suffer shortages, bottlenecks, and scarcity—all of which contributed to the collapse of the communist regime in 1989. When that happened, the technological skills and entrepreneurial desires the state had cultivated in its children were rechanneled into viruses and the first software enterprises of democratic Bulgaria. Hackers like Dark Avenger were thus the most notorious product of a failed political and cultural experiment with a long afterlife. The death of the party’s dream pushed much of the 1980s generation into an ideology that was fiercely opposed to socialism, but still wildly utopian. Many of these kids eventually emigrated to Silicon Valley and other hubs in the global information economy, bringing with them a strongly capitalist version of the communist’s dream: liberation of the human spirit through technology. |
In November 2016, the British tabloid The Daily Mail published some sensational news: escorts use the Internet just like the rest of us! The paper reported that sex workers had set up a website to rate their clients. With the air of breaking a scandal, journalist Dave Burke described sex workers “comparing notes on clients” and “getting advice on pricing” from each other. | This information was far from new. The online forum in question, SAAFE, had been created by escorts for escorts in 2003. And as Burke himself admitted, SAAFE serves a purpose well beyond rating johns on a ten-point scale. SAAFE stands for Safety and Advice For Escorts, and members use it to share life-saving information about dangerous or unscrupulous clients. |
The role of these systems is not just to reproduce inequalities, but to naturalize them. Capitalist difference-making has always required a substantial amount of ideological labor to sustain it. For hundreds of years, philosophers and priests and scientists and statesmen have had to keep saying, over and over, that some people really are less human than others — that some people deserve to have their land taken, or their freedom, or their bodies ruled over or used up, or their lives or labor devalued. These ideas do not sprout and spread spontaneously. They must be very deliberately transmitted over time and space, across generations and continents. They must be taught in schools and churches, embodied in laws and practices, enforced in the home and on the street. | It takes a lot of work. Machine learning systems help automate that work. They leverage the supposed authority and neutrality of computers to make the differences generated by capitalism look like differences generated by nature. Because a computer is saying that Black people commit more crime or that women can’t be software engineers, it must be true. To paraphrase one right-wing commentator, algorithms are just math, and math can’t be racist. Thus machine learning comes to automate not only the production of inequality but its rationalization. |
The shift from looking at unintended side effects, of leaks for example, to intentionally surfacing or creating side effects reminds me of your project “Unfit Bits.” You and your collaborator Surya Mattu demonstrate all these ways to “hack” a Fitbit by making it register steps when you’re not actually taking them, such as attaching the Fitbit to a dog’s collar or the wheel of a bicycle. That project is about deliberate subversion — side effects intended by the hacker, but not by the creator of the system being hacked. | I haven’t really thought about leaks and “Unfit Bits” together, but that project is also about the manipulation of infrastructure and the deliberate glitching of a dataset for a different political end. Whether you look at leaks in a water system or leaks in an information system like the Panama Papers or the Snowden leaks, both are about a redistribution of power. In the context of fitness trackers and employer-provided health and life insurance, the pitch is that tracker data provides an accurate picture of a person’s health. That’s a very political claim, especially in the US where access to insurance is commodified and not universal. As a result, employers are handing over employee fitness data to private insurance companies as part of workplace wellness programs, and some of them even penalize their employees for refusing to wear these trackers. Life insurance companies are using Fitbit data to help determine premiums. You’ve got this very fraught situation where a dataset is playing a critical role in how people get access to essential services. |
These open-world playgrounds extend the opportunity to act in ways often denied to us in the physical world. The criminality of games like Grand Theft Auto and its imitators may grab headlines, but often it’s the far more innocuous moments that feel the most freeing. Spotting something interesting in the distance and running over to investigate it. Driving around the city with the radio on full blast. Finding a scenic spot and just enjoying the view. All three are simple things we take for granted in open-world games, but are more than enough to arouse suspicion in the physical world if seen by someone who doesn’t like the look of you. Simply being able to leave your character idling is a freeing act when, in the real world, loitering can result in your arrest—especially for people of color. (In a particularly egregious example from April 2018, Philadelphia police arrested two black men just for sitting in a Starbucks without ordering anything.) With our actions constantly monitored and our freedom to explore eroded, it's no wonder that more and more of us are choosing to spend our time in virtual worlds instead. | The year-on-year growth in computing power means that our open-worlds are growing richer and more immersive all the time. Today, video games can sometimes feel more fulfilling than the real world. The Legend of Zelda was inspired by creator Shigeru Miyamoto’s childhood exploration of the forests and caves around his home in Sonobe, Kyoto Prefecture. Sonobe is no more, having been merged into the city of Nantan in 2006. However, millions of people play in the “miniature garden” that Miyamoto tells interviewers he sets out to create in each game, a fantastical recreation of a countryside that no longer exists. Can our miniature gardens really replace the real thing? Could they one day? No Exit The idea of humanity abandoning the physical world for the freedom of a virtual life has long been explored in science fiction, but the concept hit the biggest mainstream stage possible in the Steven Spielberg-directed blockbuster Ready Player One that came out in March 2018. In the movie (as well as the book it’s based on), much of humanity finds an escape from their desolate Earth in a virtual-reality universe called the OASIS. The OASIS takes open-world gaming’s promise of freedom and makes it a reality: inside, you really can be or do almost anything. It’s an idea as unrealistic as any of the technology that appears in the film. The appeal is obvious when you live in a world of countless restrictions. In the end, though, our games are still the product of that world, with the same issues we play to escape. A woman playing Grand Theft Auto may be free to explore Los Santos or Liberty City in a way that she could never explore her own city, but she has to do it in the body of a man, in a world filled with sexism. The female player of Grand Theft Auto is encouraged to murder sex workers. The queer player of The Witcher has to role-play heterosexual romance. The player of color who picks up most of the games in the Far Cry series is going to find themselves in a stereotypically “exotic” land, shooting its stereotypically “exotic” inhabitants. Even within the seemingly infinite possibilities of the digital world, the marginalized will still find themselves at the margins. |
But as it turns out, different units within the FAA had similar records in their possession and redacted them in different ways. The FAA wound up producing the same emails in redacted and unredacted forms, inadvertently revealing how subjective the process really is. What to keep and what to omit appeared to be dependent on the momentary feelings and fleeting judgments of whoever was doing the redactions. | The email conversations show FAA employees casting doubt on General Atomics’ claim that their technology is safe. In one email, an FAA employee notes that the company is “worried about getting an approval” and that time will run out before the permit process can be completed because, “as usual, they have diplomats from the military attending and want to demonstrate the capability.” Secrecy is the way in which government agencies reveal their mistrust of the people. The cloak is needed, you’ll often hear officials say, to keep everyone safe. But safe from what? Or better yet, from whom? Secrecy breeds its own kind of distress, encouraging a fear of the unknown that justifies new levels of authority in a slow-moving but perpetual crisis. Secrecy and surveillance are two sides of the same coin. Both stem from an attitude that the people atop our institutions know better than the rest of us how to govern and structure society. Both engender mistrust and inhibit collective action. They engender mistrust by encouraging us to outsource responsibility for safety. This inhibits collective action, not only by chilling our speech and association but by ceding authority to institutions we mostly do not participate in. Do It Yourself Why would I, of all people, want to seek a public record? It could be to learn about an issue that affects you or your community. It could be to find more bulletproof evidence of something you know or suspect that will help with your advocacy work. It could be an entry point into the workings of politics or profit by finding the traces left behind by the companies that interact with the government. How do I get started? Don’t be intimidated. We had no idea what we were doing when we got started either. No one is born with this knowledge. |
All of this — Atyrau’s extreme security measures and the steady flow of American businesspeople — comes from the fact that the city is home to Kazakhstan’s biggest and most important oil extraction project. In 1993, shortly after the fall of the Soviet Union, the newly independent nation opened its borders to foreign investment. Kazakhstan’s state-owned energy company agreed to partner with Chevron in a joint venture to extract oil. The project was named Tengizchevroil, or TCO for short, and it was granted an exclusive forty-year right to the Tengiz oil field near Atyrau. Tengiz carries roughly 26 billion barrels of oil, making it one of the largest fields in the world. Chevron has poured money into the joint venture with the goal of using new technology to increase oil production at the site. And I, a Microsoft engineer, was sent there to help. Cloud Wars Despite the climate crisis that our planet faces, Big Oil is doubling down on fossil fuels. At over 30 billion barrels of crude oil a year, production has never been higher. Now, with the help of tech companies like Microsoft, oil companies are using cutting-edge technology to produce even more. The collaboration between Big Tech and Big Oil might seem counterintuitive. Culturally, who could be further apart? Moreover, many tech companies portray themselves as leaders in corporate sustainability. They try to out-do each other in their support for green initiatives. But in reality, Big Tech and Big Oil are closely linked, and only getting closer. The foundation of their partnership is the cloud. Cloud computing, like many of today’s online subscription services, is a way for companies to rent servers, as opposed to purchasing them. (This model is more specifically called the public cloud.) It’s like choosing to rent a movie on iTunes for $2.99 instead of buying the DVD for $14.99. In the old days, a company would have to run its website from a server that it bought and maintained itself. By using the cloud, that same company can outsource its infrastructure needs to a cloud provider. | The market is dominated by Amazon’s cloud computing wing, Amazon Web Services (AWS), which now makes up more than half of all of Amazon’s operating income. AWS has grown fast: in 2014, its revenue was $4.6 billion; in 2019, it is set to surpass $36 billion. So many companies run on AWS that when one of its most popular services went down briefly in 2017, it felt like the entire internet stopped working. Joining the cloud business late, Google and Microsoft are now playing catch-up. As cloud computing becomes widely adopted, Amazon’s competitors are doing whatever they can to grab market share. Over the past several years, Microsoft has reorganized its internal operations to prioritize its cloud business. It is now spending tens of billions of dollars every year on constructing new data centers around the planet. Meanwhile, Google CEO Sundar Pichai announced that in 2019, the company is putting $13 billion into constructing new offices and data centers in the US alone, the majority of which will go to the latter. |
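As a back-of-the-envelope illustration of the rent-versus-buy logic in the iTunes/DVD analogy, here is a minimal Python sketch. The server prices are invented placeholders, and real cloud pricing involves many more dimensions (egress, storage, staffing, support); the point is only the break-even arithmetic.

```python
# A back-of-the-envelope sketch of the rent-vs-buy logic in the iTunes/DVD
# analogy, applied to servers. Prices other than the movie example are
# invented placeholders, not real cloud or hardware pricing.
def breakeven_months(purchase_price: float, monthly_rent: float) -> float:
    """Months of renting after which buying outright would have been cheaper."""
    return purchase_price / monthly_rent

# The movie example from the text: $14.99 to buy vs. $2.99 per rental.
print(breakeven_months(14.99, 2.99))   # ~5 rentals and buying wins

# A hypothetical server: $6,000 up front vs. $250/month in the cloud --
# ignoring the maintenance, staffing, and flexibility that push companies
# toward renting even past the break-even point.
print(breakeven_months(6000, 250))     # 24 months
```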
Drones are being heralded as revolutionary in agriculture, but they have many known vulnerabilities since their security is essentially unregulated. They can be hijacked, remotely tampered with to return false data, or piloted to infiltrate remote Wi-Fi networks. The US Department of the Interior also sees espionage risks in the fact that its own drone fleets are manufactured by a Chinese company. | The tools of precision agriculture generate valuable datasets, and the value of those datasets corresponds to their size. To give a sense: the largest dairy farm in the world occupies more than 22 million acres and the largest field operation sits on more than 500,000 acres. Data from these operations can move markets. The US Department of Homeland Security has reported that at least one company has been approached by commodity brokers with an offer to buy the company’s data. But insider trading threats aside, the integrity of large operations’ data can be undermined by weaknesses in any number of factors that make precision agriculture work—IoT devices, software, or physical hardware. Although such market sabotage doesn’t appear to have happened yet, agricultural data breaches have: the security industry’s annual data breach compilation identified eleven data breaches among agricultural companies in 2019. |
What we’re doing at the Indigenous Futures Institute is we’re really thinking about ways to do that. This work is couched within our design lab and engineering school. We’re intentionally trying to make it as multidisciplinary as possible, but sometimes it’s hard to be disruptive and chaotic in academia. But we’re trying our best, you know. We have a range of different focuses, but it’s about codesigning technology with communities and being iterative about that process. That could range from the story I just told you about what happens when Indigenous people use genome editing to think about our past, or what happens when Indigenous people use machine learning and deep learning to shine a light on the past. It could include automating our archival records. It could also be thinking about creating deterrent and safeguarding technologies to keep museums from sequencing the ancient genomes in their collections without our consent. There could be a number of different ways that we use those technologies. | Then there’s the focus on education, educational tools and creating a safe space for all of our Indigenous and historically marginalized communities of people to come, learn, and be productive and innovative. We have a huge focus on sustainability, whether it’s environmental sustainability or looking towards Indigenous innovation. Within architecture and urban planning, what does it mean when Indigenous communities are in the driver’s seat of those fields? And how do we actually begin to look towards knowledge that is quite ancient for modern solutions? So that’s it in a nutshell. Some of the people that are involved there are Provost Wayne Yang and Theresa Ambo, Sarah Aarons, Manuel Carrillo. That’s the core group, but it’s very new. We’re also thinking about Indigenous takes on economics. Like, why is your financial projection quarterly? Why isn’t it, you know, ten generations? Does decentralization guarantee equal distribution of power? Who do decentralization and distributed technologies currently benefit, and who should they benefit? The distributed web has become a common buzzword covering everything from the blockchain to peer-to-peer communication protocols and file storage. Decentralized web projects purport to solve the inherent problems with the internet as we know it today, promising to democratize internet governance through distributed ownership and control over web infrastructure. |
The strongest example of this multimodal way of communicating are the parts of the film when my poetry and the oil-paint animations of the British artist Em Cooper converse with each other. In Cooper’s constantly flowing work, no image is static. Figures emerge briefly and then merge into the background, before re-forming into other figures; everything blurs into everything else. In a dynamic that seemed to reproduce the differences between speech-dominant cultures and more multimodal ways of connecting, other animators who were approached to work on the film took my words too literally, pairing the lines “The ear that hears the cardinal hears in red” with a cartoon cardinal and “The eye that spots the salmon sees in wet” with an animated salmon. Cooper’s brushstrokes, by contrast, were full of color, motion, and texture, occasionally offering a fleeting trace of vines, volcanoes, waves, or flags—metaphors I had used elsewhere in my writing to describe myself or challenge the world we lived in. Cooper and I never thought we were taking everyone in the audience to the same destination; instead, we offered people multiple pathways into a world in which everything is interwoven, where motion, rhythm, pattern, color, sound, and texture freely interact, offering endlessly unfolding possibilities. I recognized, however, that this was a rarefied means of communicating; not something that could always be open to me, let alone anyone else. Four years later, at the outset of the pandemic, I began to ask myself how technology might allow us to create new communities in which diverse bodies, voices, and language might come together, as they had in my collaboration with Cooper, and thrive, much like we all had in kindergarten. Cut off and segregated in my own home, I turned again to poetry and technology to create some alternative pathways: co-teaching multigenerational, global, and intersectional poetry writing courses for beginning poets, and collaborating with three fellow poets based on the artwork of the artist Malcolm Corley, who is also autistic. In both the courses and the collaboration, speakers and alternative communicators came together to make work that challenged the supremacy of speech-based culture. Traces of our entanglements live on in a chapbook, Studies in Brotherly Love. In the introduction, poet Claretta Holsey describes our modes of communication this way: “We crafted poems that speak to us and to our causes: awareness of performative utterances, as communication can and does happen outside of written text, outside of simple speech; embrace of Black vernacular, its rhythm and blues; recognition of the Black family as a model of resilience; respect for nature, which awes and overwhelms; respect for the body, made vulnerable by want.” | Imagining that technology alone can liberate us is a bit shortsighted and, in some ways, disabling. But, if we imagine the cultivated garden of a speech-based society is the only way of being, then the communication technologies we build will continue to keep us stuck in an inclusion/exclusion binary, in which some beings are seen as disposable and others not. |
I mean, everything I’ve ever worked on has failed. I’ve worked on some ambitious projects at several of these big companies, and none of them have succeeded. But I’ve still been rewarded and promoted. And I think that’s a good thing about Silicon Valley. Failure isn’t looked down upon, which is a positive aspect of tech culture. | When you fail inside a big company, does it still feel like failure? The average time spent on a team is well under two years at most of these big companies. So when a company wants to change direction and abandon a product, people usually don’t take it that hard because they weren’t planning on being there for very long anyway. |
Scope Creep Given this history, it seems likely that electronic monitoring will continue to become more popular, especially with the advent of new technological capabilities. The state-of-the-art LOC8 device introduces an accelerometer, which measures the user’s movement. This means it can now function as a tracking device in the same way as a Fitbit does—only users are legally required to wear it. Parole officers could know when the wearer is sleeping, moving, and more, all using acceleration data. | As the technology improves, an important question technologists must ask is whether we should build these tools, not whether we can. For most wearers of electronic monitors, being out in the world is better than being in prison, even if one is under surveillance. But like many techno-utopian solutions, the reality of electronic monitoring as an alternative to prison is far removed from the fantasy. The user experience of ankle monitors lags behind even basic consumer tracking products, like the Fitbit or the Apple Watch. Although companies call the people who wear the monitors their “clients,” their real customers are police departments and prison systems. The website of BI Incorporated, an electronic monitoring provider, highlights their “easy-to-use mapping features” and twenty-four-hour call centers. The priority is selling government officials on the product—not the comfort and quality of life of the person wearing the device. |
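To make concrete what inferring "sleeping, moving, and more" from acceleration data could look like, here is a minimal Python sketch. The windowing scheme, thresholds, and sample data are invented for illustration; this is not based on the LOC8 device or any real monitoring software.

```python
# A minimal sketch of activity inference from accelerometer data: threshold
# the variance of acceleration magnitude over short windows to label the
# wearer as "still" or "moving". Window size, threshold, and sample data
# are invented; real activity-recognition pipelines are far more elaborate.
import math

def label_activity(samples, window=50, still_threshold=0.05):
    """samples: list of (x, y, z) accelerations in g. Returns per-window labels."""
    labels = []
    for start in range(0, len(samples) - window + 1, window):
        window_samples = samples[start:start + window]
        mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in window_samples]
        mean = sum(mags) / len(mags)
        variance = sum((m - mean) ** 2 for m in mags) / len(mags)
        labels.append("moving" if variance > still_threshold else "still")
    return labels

# Hypothetical data: 50 samples at rest (~1 g, gravity only), 50 while walking.
at_rest = [(0.0, 0.0, 1.0)] * 50
walking = [(0.3 * (-1) ** i, 0.1, 1.0 + 0.4 * (i % 3)) for i in range(50)]
print(label_activity(at_rest + walking))  # -> ['still', 'moving']
```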
What exactly is a “data body”? Recently, we’ve seen the rise of various tools that let you see the data that companies have about you. But your approach seems more holistic. It’s not just, does Facebook have this data about me, but what are all of the different pieces of data that the government or private companies might have—and what am I able to do about it? We try to get community members to think not just about the impact that their data has on them, but the impact that their data has on the decisions that affect their family, their neighborhood, and their city. One of our exercises is called “What’s in Your Wallet?” As part of that exercise, we often use the example of a person who uses their EBT card at the liquor store up the street and purchases foods that aren't considered healthy. Those activities create a data trail. Maybe a bank will look at those data trails and decide not to invest in a grocery store in that neighborhood. It's difficult to know exactly how some decisions are made, but what we do know is that data is leveraged to make most of those decisions. | That’s what we mean by data bodies. It’s not just the individual; it's the information that's been generated about this individual and the systems that interact to make decisions about this individual, this individual's family, this individual's neighborhood—all the data’s tentacles. In the Digital Defense Playbook, you cite a conversation with a community member who says about interacting with government agencies, "For their benefit they do communicate. But for my benefit, no." And I see this a lot in government work—if agencies want to communicate to surveil and penalize, then they can and will. But if you want them to share information to, say, verify that you’ve lost your job so that you can get food stamps, they often can’t or won’t. |
It was really neat to see the grassroots unification around inclusive ideals, without having to push for an official set of “values.” In a way it was easy because the company was small. It was about thirty or thirty-five people when I left in 2014. In a group that size, it is pretty easy to have value alignment without too much structure. | That’s thirty-five people on the engineering team, or in the whole company? That was the entire company. Oh wow, okay. I didn’t realize it was that small. Yeah, that was what was so neat about OkCupid: how many people are reached and impacted by such a small team. You’ve spoken before about the internet literally saving the lives of queer teenagers. I wonder if you could start by talking about that a little bit. |
Engineering culture is about making the product. If you make the product work, that’s all you’ve got to do to fulfill the ethical warrant of your profession. The ethics of engineering are an ethics of: Does it work? If you make something that works, you’ve done the ethical thing. It’s up to other people to figure out the social mission for your object. It’s like the famous line from the Tom Lehrer song: “‘Once the rockets are up, who cares where they come down? That’s not my department,’ says Wernher von Braun.” So I think that engineers, at Facebook and other firms, have been a bit baffled when they’ve been told that the systems they’ve built—systems that are clearly working very well and whose effectiveness is measured by the profits they generate, so everything looks ethical and “good” in the Google sense—are corrupting the public sphere. And that they’re not just engineers building new infrastructures—they’re media people. | Several years ago, I spent a lot of time around Google engineers who were connected to the journalism enterprise early on. They had a robust language around information control and management. When the conversation shifted to news, however, they had no idea what the conversation was about. News was something different. |
Target Acquisition Ever since Mel Farr helped popularize the first generation of SIDs, lenders have stood by the claim that SIDs allow them to fill the crucial role of extending credit to people who would otherwise not qualify for the cars they desperately need to get to work—the so-called “subprime” market. According to Mother Jones, in 2016 about 70 percent of these loans used “payment assurance” devices, which include SIDs in their early and later forms, as well as GPS trackers without SIDs. | This is a multi-billion-dollar market. Auto dealers target these millions of borrowers with low or non-existent credit scores for loans even without SIDs because, despite the potential for default, they can still make money both by inflating car prices and by charging high interest rates. And the market is growing: In 2019, a record seven million Americans were over three months late on their auto payments, largely due to the increasing number of subprime borrowers. Theoretically, SIDs could benefit borrowers by helping reduce these interest rates, which should be based on the risk for lenders. But in reality, interest rates are “not [reduced by] installation of the [SID],” according to testimony by Sophia Romero of Legal Aid of Southern Nevada. Instead, SIDs encourage lenders to offer larger loans on higher-value vehicles, with the highest possible interest rates. In a 2012 survey of 1,300 dealers, “nearly 70% of respondents indicated that they believed the use of devices allowed them to sell higher value… vehicles with less risk.” This belief comes from the fact that lenders can continually threaten to remotely disable the vehicle while keeping drivers in their cars, paying for as long as possible until they default—at which point, the lender can quickly repossess the car and sell it to someone else. And in the background, lenders package and sell these predatory loans as asset-backed securities to banks. This should sound familiar—in housing, it’s what caused the 2008 financial crisis. And just like in 2008, it’s the borrowers, not the banks, who suffer. So, it’s true that SIDs reduce risk so lenders can extend credit to borrowers who otherwise wouldn’t get it. But SIDs have nothing to do with someone’s actual ability to pay, which means borrowers’ pockets are drained while lenders jump indiscriminately from one borrower to the next. Keeanga-Yamahtta Taylor describes a similar dynamic in the housing market as “predatory inclusion,” where mortgage lenders in the 1960s targeted previously excluded, mostly Black borrowers with home loans that were federally insured to reduce lender risk. Instead of actually supporting these borrowers, they raked in as much money as possible until borrowers—who were portrayed as unintelligent and irresponsible—lost their homes to foreclosure. |
By the 1960s, utility bills, catalogs, advertisements, invoices, receipts, and other forms of impersonal, bulk communication had come to account for more than 80 percent of all mail—clogging up the mailstream, but also providing a critical revenue stream. It was clear to postal management that, on their own, hardware innovations like the Transorma would not be enough. Accordingly, in 1963 the Post Office introduced what might be best understood as its first innovation in software: the five-digit ZIP code. Above all, ZIP codes served as a new standardization protocol, transforming an unruly map into an efficient mosaic. Encoded in each five-digit string was a surfeit of data, helping to direct each parcel through a carefully delineated geographical hierarchy, from regional processing plant down to localized delivery zone. Not only did this numerical logic significantly simplify manual mail sorting, it also greased the skids of mechanization. | Homegrown alternatives to the Transorma were developed throughout the 1960s, designed specifically to take advantage of the new ZIP system, which theoretically enabled faster keying by workers. As installation expanded, sorting machines began to play the part of crucible for a brewing hostility between postal workers and management. By 1968, the Post had purchased 145 Multiple Position Letter Sorting Machines (MPLSM) designed by the Burroughs Corporation, famous for its hand-crank calculators. Unlike the Transorma, which advanced to the next letter only after the clerk had entered a code, the new American-built MPLSM made pacing a point of contention. Workers wanted to be able to advance the sorting machine themselves, letter by letter, allowing for more flexibility and precision in the keying process. Management wanted to program the machine to advance at automatic intervals, maximizing productivity and ensuring predictable throughput. The distinction between operator-pacing and machine-pacing was the subject of considerable research: Which was more efficient? More sustainable? More cost effective? Consultants were hired to conduct extensive psychophysical studies, monitoring eye movements and keystrokes, fatigue and focus, hoping to determine the optimal balance between speed and accuracy. Despite the findings in these reports, management opted for machine-pacing, disregarding the ample evidence that this would greatly reduce overall efficiency and further degrade working conditions. The tug-of-war over the MPLSM was not an isolated incident. Grievances over wages and working conditions—facilities were dated and deteriorating, the hours were getting longer, the productivity quotas higher—were piling up in postal facilities across the country. But disputes over the role of technology in particular helped to set the stage for, and ultimately played a starring role in, the most significant reshaping of the Postal Service in its history. |
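As a rough illustration of how a five-digit ZIP code encodes that geographical hierarchy, here is a minimal Python sketch. It assumes the commonly described breakdown—first digit for a broad national area, the first three digits for the sectional center facility, the last two for the local delivery zone—and is not the Post Office's actual sorting logic.

```python
# A minimal sketch of the hierarchy assumed to be encoded in a five-digit
# ZIP code: first digit = broad national area, first three digits =
# sectional center facility (regional processing plant), last two =
# local post office / delivery zone. Illustrative only.

def parse_zip(zip_code: str) -> dict:
    """Split a five-digit ZIP code into its hierarchical components."""
    if len(zip_code) != 5 or not zip_code.isdigit():
        raise ValueError("expected a five-digit ZIP code")
    return {
        "national_area": zip_code[0],       # coarse region of the country
        "sectional_center": zip_code[:3],   # regional processing plant
        "delivery_zone": zip_code[3:],      # local post office / zone
    }

def sort_key(zip_code: str) -> tuple:
    """Sorting mail by this key groups it plant by plant, then zone by zone."""
    parts = parse_zip(zip_code)
    return (parts["national_area"], parts["sectional_center"], parts["delivery_zone"])

# Example: 60616 and 60629 (both Chicago) share sectional center "606",
# so they travel together until the final sort into delivery zones 16 and 29.
print(parse_zip("60616"))
print(sorted(["94110", "60629", "60616", "10001"], key=sort_key))
```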
Pit Schultz was sitting in a Kreuzberg art gallery, less than a mile from the ruins of the Berlin Wall, when he sent one of the first emails to a mailing list that he had just helped launch. Schultz laid out a vision for what it might become: It should be a temporary experiment to continue the process of a collective construction of a sound and rhythm—the songlines—of something we are hardly working on, to inform each other about ongoing or future events, local activities, certain commentaries, distributing and filtering textes, manifestos, hotlists, bits and blitzmails related to cultural politics on the net. It’s also an experiment in collaborative writing and developing strategies of group work… The list is not moderated. Take care. | It was June 1995, and the internet was changing in fundamental ways. The US government–funded National Science Foundation Network (NSFNET), once the backbone of the internet, had been decommissioned a few months earlier. New companies like Amazon, Yahoo!, and Netscape were racing to cash in on an early wave of commercialization, and a rising priesthood of techno-utopians gathered around Wired magazine—launched in San Francisco in 1993—to herald the coming digital economy as the harbinger of a more unified, democratic, and horizontal world. |
These taxi workers were experiencing the very earliest stage of a global re-organization of private and public transportation, fueled by billions of dollars of financing from venture capitalists. Under the shadow of the Great Recession, in a period of high unemployment and slow job growth, Uber, Lyft, and their erstwhile competitor Sidecar used this technocapital to begin offering almost anyone with access to a car a way to make money by driving people around San Francisco. The companies aggressively marketed themselves as disruptors of the transportation industry; consumers and commentators, seduced by on-demand, technology-fueled mobility and the prosocial promise of what the companies called “collaborative consumption,” enthusiastically adopted their narrative. | Cabbies, however, saw Uber and Lyft as well-financed corporate continuations of the taxi companies that had long subjugated them. “This isn’t about technology,” Mark, a long-time taxi worker and advocate told me in 2013. He explained that, for the previous few years, San Francisco taxi drivers had already been using an app called Cabulous, which essentially did what Uber and Lyft were doing. “They claim they’re innovative and new, but we already have this technology,” he went on. What was different, Mark described, was that Uber and Lyft had “a new exploitative business model,” though it was just “one step removed from the leasing model that the taxi companies have been using for years.” Since 2012, much of the positive discourse around Uber and Lyft has continued to regurgitate the notion that these are companies built on technological innovations that brought new forms of transportation to people and places who needed them. Meanwhile, critiques of these companies, and of the gig economy as a whole, have typically seen Uber and Lyft as breaking sharply from earlier modes of employment to create new forms of precarity for workers. In both cases, the public discussion tends to see these companies as creating major discontinuities, whether of technology or of labor models. What Mark pointed out, however, is that Uber and Lyft are in many ways not as different as we tend to think from the taxi companies that prevailed until 2012. In 2020, almost a decade after the advent of Uber and Lyft, we seem to be at another turning point. The ride-hailing industry is facing a wave of militant self-organizing and claims to employment status by drivers. So far, the most significant mobilization has been the fight over AB5, a California assembly bill that was signed into law in September 2019, and which makes it much clearer that drivers should be treated as employees of Uber and Lyft. The companies have fought this reclassification in myriad ways, and some drivers fear that it may cause them to lose their flexibility. But those who have welcomed the passage of AB5 hope it will deliver them many of the benefits—from healthcare to a guaranteed minimum wage—that Uber and Lyft have so far denied them. On all sides of the issue, no one doubts that we are at a critical juncture in the history of labor and urban transportation. |
While there are certainly emancipatory potentials here, they are far from adequate to the task of planning production in a post-capitalist world. The digital socialist focus on algorithms presents a serious problem. It risks constraining the decision-making processes of a future socialist society to focus narrowly on optimization: producing as much as possible using the fewest resources. To travel down this road is to ignore and discard vast amounts of qualitative information, which remains crucial to achieving many of the ends and goals of a socialist society. After all, the societies of the future will want to do more than just produce as much as possible using the fewest resources. They will have other goals, which are more difficult to quantify, such as wanting to address issues of justice, fairness, work quality, and sustainability—and these are not just matters of optimization. This means that, no matter how powerful the planning algorithm, there will remain an irreducibly political dimension to planning decisions—for which the algorithm’s calculations, no matter how clever, can only serve as a poor substitute. Algorithms are essential for any socialist planning project because they can help clarify the options among which we can choose. But human beings, not computers, must ultimately be the ones to make these choices. And they must make them together, according to agreed-upon procedures. | This is where planning protocols come in. They streamline decision-making by clarifying the rules by which decisions are made. Deployed in concert with algorithms, protocols enable a range of considerations—besides those available to an optimization program—to enter into the planning process. We might say there is a division of labor between algorithms and protocols: the former discard irrelevant or duplicate options, clarifying the decisions to be made via the latter. Putting both algorithms and protocols to work, people can plan production with computers in ways that allow their practical knowledge, as well as their values, ends, and aims, to become integral to production decisions. The result is something that neither capitalism nor Soviet socialism allowed: a truly human mode of production. The Price Is Right Any serious attempt at socialist planning has to reckon with the problems posed by the “socialist calculation debate,” a decades-long argument that has influenced how generations of socialists have imagined a post-capitalist future. The right-wing Austrian economist Ludwig von Mises kicked off the debate in 1920 with “Economic Calculation in the Socialist Commonwealth,” a full-frontal assault on the feasibility of socialist planning. At the time, this wasn’t just a theoretical question. The revolution was already well underway, not only in Russia, but also in Germany, and very nearly in Italy and other countries. Socialists claimed that, with the capitalists cast aside, they could use modern machinery to construct a new type of society, one oriented around human needs, rather than profit. Everybody would get access to the goods and services they needed, while working less. Mises argued that socialists were wrong on both counts. Instead, people in a socialist society would work more hours and get less for it. That’s because, in his view, the efficiency of modern economies was inextricably connected to their organization via the market, with its associated institutions of money and private property. 
Get rid of these institutions, and the technologies developed over the course of the capitalist era would become fundamentally worthless, forcing societies to regress to a less advanced technological state. |
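To give a concrete sense of the kind of "optimization" the passage is worried about, here is a toy linear-programming sketch in Python using scipy. The goods, resources, and numbers are invented; the point is only that anything not encoded in the objective or constraints—fairness, work quality, sustainability—is invisible to the algorithm and has to come from the protocols people agree on.

```python
# A toy sketch of optimization-as-planning: meet fixed output targets while
# minimizing resource use. The two-good/two-resource setup and all numbers
# are invented for illustration; this is not any actual planning model.
from scipy.optimize import linprog

# Resources consumed per unit of each good (rows: steel, labor-hours).
resource_use = [
    [2.0, 1.0],   # steel per unit of good A, good B
    [3.0, 4.0],   # labor-hours per unit of good A, good B
]
resource_cost = [1.0, 1.0]   # weight each resource equally in the objective
targets = [100.0, 80.0]      # required output of good A and good B

# Decision variables: units produced of A and B.
# Objective coefficients: total weighted resource use per unit of each good.
c = [sum(resource_cost[i] * resource_use[i][j] for i in range(2)) for j in range(2)]

# Constraint: produce at least the target of each good (x >= targets),
# written as -x <= -targets for linprog's A_ub @ x <= b_ub form.
result = linprog(c, A_ub=[[-1, 0], [0, -1]], b_ub=[-t for t in targets],
                 bounds=[(0, None), (0, None)])
print(result.x)  # -> [100., 80.]: the optimizer just hits the targets exactly

# Everything not encoded here -- work quality, fairness, sustainability --
# is invisible to the optimizer; those choices belong to the protocols.
```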
I think it’s deeply unhealthy. I think it has caused a lot of mental health problems that founders are not able to talk about, because it would hurt their prospects at raising another round and hiring people, not to mention keeping bread on the table. That’s the thing: as an employee, you can just show up and do the job, to pay the bills. As a founder, you still have to pay the bills, but you can’t divorce yourself from your work on an ideological level. Even if you’re super burnt out, you can’t check out—you have to keep performing that founder role. This wasn’t the case for me, but I imagine that for younger founders labeled as some sort of wunderkind, this dynamic would be even more damaging—especially if the company fails, as most do. | AL: The way people view you affects how you raise money. That is definitely true. JW: I remember a conversation I had with a mentor of mine where I told her, “I only want to say things that are true.” To me, words and language are almost the only things that we have; they’re so precious. They’re how we dream and make commitments to each other. And my mentor laughed at me, in all kindness. She said, “Never start a company. You will constantly have to say things that are not true to you, if that is what is required for the company to exist.” There’s also real danger in saying things that are not true to you—the cognitive dissonance is so painful that you shift your self-concept so that saying those words makes sense. It’s the same reason why manipulation tricks, like getting someone to do a favor for you to make them like you, work. |
There’s another angle on Agile, though. Some people I talked to pointed out that Agile has the potential to foster solidarity among workers. If teams truly self-organize, share concerns, and speak openly, perhaps Agile could actually lend itself to worker organization. Maybe management, through Agile, is producing its own gravediggers. Maybe the next crisis of software development will come from the workers themselves. | A previous version of this article incorrectly identified Al Tenhundfeld as a co-author of the Agile Manifesto. The present version has been corrected. Net zero has gone viral. Everyone is announcing their net-zero greenhouse gas emissions targets, from Saudi Arabia to Australia. By the final day of COP26—the annual United Nations Climate Change Conference held in Glasgow in 2021—more than 130 countries had made net-zero pledges of one kind or another, representing about 70 percent of the world’s UN-recognized nations. The private sector has also rushed in: more than 30 percent of the world’s 2,000 biggest public companies have committed to net-zero targets, including Amazon, Walmart, and ExxonMobil. A global ambition has coalesced. It’s an achievement of sorts. Limiting warming to 1.5°C requires reaching net zero by midcentury, according to the Intergovernmental Panel on Climate Change (IPCC). This doesn’t mean zero emissions; rather, it means that some remaining amount of positive emissions would be canceled out by “negative emissions”—that is, by carbon removals. Negative emissions can be generated by using ecosystems like forests, farms, and oceans to store more carbon, or by using industrial technologies like carbon capture and storage (CCS) to pull carbon from the atmosphere and store it in rock formations. When these negative emissions balance out the positive emissions—when the amount of carbon being taken out of the atmosphere equals the amount of carbon being put into the atmosphere—then net zero is reached. |
Packing and Cracking To understand how gerrymandering works, consider a pile of stones, some blue and some red. You have to group the stones into as many piles of three as possible. But you want to arrange the piles so that blue wins—in other words, you want the maximum number of piles to have a majority of blue stones. | Suppose the pile has fifteen stones: nine red and six blue. Even though there are more red stones—red has a three-to-two advantage over blue—this is a combination that can nonetheless be grouped into piles of three so that blue has the overall majority. Forming groups of three blue stones must be avoided, for only two stones are required for blue to “win” a pile. Similarly, it is ill-advised to place a blue stone with two red ones: blue does not win that pile, so again a blue stone gets wasted. The optimal solution for a blue victory is to form three piles of two blue stones and one red stone, and two piles of three red stones. |
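The stone-piling example can be checked by brute force. The short Python sketch below enumerates every way to compose the five piles and confirms that three blue-majority piles is the best blue can do with nine red and six blue stones—packing red into all-red piles and spending exactly two blue stones per win, just as the text describes.

```python
# A brute-force check of the stone-piling example in the text: 9 red and
# 6 blue stones, grouped into 5 piles of 3, maximizing the number of piles
# where blue holds the majority ("packing and cracking" in miniature).
# Illustrative sketch only.
from itertools import product

BLUE, RED, PILES = 6, 9, 5

best = None
# counts[b] = number of piles containing exactly b blue stones (b = 0..3).
for counts in product(range(PILES + 1), repeat=4):
    if sum(counts) != PILES:
        continue
    blue_used = sum(b * counts[b] for b in range(4))
    red_used = sum((3 - b) * counts[b] for b in range(4))
    if (blue_used, red_used) != (BLUE, RED):
        continue
    blue_wins = counts[2] + counts[3]  # piles where blue has 2 or 3 stones
    if best is None or blue_wins > best[0]:
        best = (blue_wins, counts)

print(best)
# -> (3, (2, 0, 3, 0)): two piles of three red, three piles of two blue plus
# one red, matching the essay's optimal "blue wins 3 of 5" arrangement.
```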
Something of a Revolution In January 2006, Ric Sternberg made what he thought would be a simple phone call to his power provider, Pedernales Electric Cooperative (PEC). PEC is the largest REC in the country, covering over 8,000 square miles of territory west of Austin, Texas. Sternberg wanted to get information on any incentives or programs offered around residential solar systems. To his dismay, it turned out there weren’t any. Worse, nobody at PEC seemed much interested in the topic. When Sternberg broadened his inquiries to try to learn more about the co-op’s governance policies and how rules could be changed, he quickly found himself hitting walls—walls clearly meant to keep member-owners like him from exercising their rights and obligations. Basic information was locked away, meetings were only open to management and board members, and the board—which was supposed to be elected by the membership—had set up a system whereby the only people who could run for election were those selected by a board-approved nominating committee. What was supposed to be a democratic institution had been taken over by a small group of power brokers. In response, Sternberg organized something of a revolution, which he recorded and analyzed in a charming amateur documentary. Over the next six months, Sternberg found more co-op members who shared his concerns, and together they formed an activist group called PEC4U. At first, they tried to work within the system by making moral appeals to the nominating committee and the board to allow more people to run for election. When these pleas went unheard, PEC4U escalated accordingly. They used traditional activist techniques to spread the word about PEC’s antidemocratic leadership and build a movement to take back control of the co-op: flyering and tabling at co-op events, canvassing various towns and communities across the territory, and phone banking. They also organized a class-action lawsuit centered on PEC’s opaque administration and exorbitant board salaries, and how it was using its profits from selling electricity. The lawsuit proved to be an especially useful tactic, as it forced PEC to open up previously confidential records of its management and accounting practices. This revealed even deeper issues of corruption that had taken root at the highest levels. The new discoveries, combined with organizing efforts by PEC4U, led to hundreds of angry co-op members showing up at the 2007 annual membership meeting to demand changes. This was a stark contrast to the previous year, when only a handful of people showed up to present softer criticisms and milder demands. | The unprecedented level of mobilization by co-op members, combined with growing scrutiny from state legislators, cracked open the once-impenetrable fortress of the PEC administration. Key figures resigned or announced their retirement, including the general manager, who had held his position for three decades and would soon face criminal charges for embezzlement and money laundering. Under pressure, the remaining members of the board amended the election bylaws, and in 2008, PEC had its first truly democratic election. Fifty-eight candidates ran for the five contested seats, and practically all of the incumbents were ousted. With the membership back in control of PEC, the gears finally began to turn on the issue that had initially kick-started Sternberg’s crusade: the co-op committed itself to one of the most aggressive decarbonization goals in the country. 
Encoded for Democracy What is striking about the PEC saga is just how quickly the administration was overthrown, relative to similar efforts targeting for-profit utilities. It took about two and a half years between Ric Sternberg making his first phone call to the co-op and the total overhaul of the bylaws and the board. Far less ambitious campaigns advocating reforms at IOUs tend to take much longer to succeed—if they ever do. |
Data is useful for capitalism. That’s not new. What’s new is the scale and significance of data, thanks to breakthroughs in information technology. Take scale. Digitization makes data infinitely more abundant, because it becomes much easier to create, store, and transmit. You can slap a sensor on almost anything and stream data from it—an assembly line, a gas turbine, a shipping container. Our ability to extract information from the productive process in order to optimize it has reached a level of sophistication far beyond anything Taylor could’ve ever imagined. | But observing the productive process isn’t the only way we create data. More broadly, we create data whenever we do anything that is mediated or monitored by a computer—which, at this point, is almost everything. Information technology has been woven into the entire fabric of the economy. Just because you’re not directly using a computer doesn’t mean you’re not making information for someone somewhere. Your credit score, your healthcare history—simply by virtue of being alive in an advanced capitalist country, you are constantly hemorrhaging data. |
We highlighted the fact that they were telling us the system was wrong at least 350 times and that they were relying on the naked eye of two analysts to catch the errors of the algorithm. This caused great concern about the potential of false arrests. So, we dissect what's being said and present that back to people consistently. | You’ve mentioned how Detroit is 80 percent Black, a stark contrast to other, primarily white cities that have banned facial recognition. How does your fight in Detroit look different from the fight in whiter cities like San Francisco or Boston? We've built relationships with organizers in pretty much all the other places that have successfully banned facial recognition, including Portland, where folks are pushing for the most comprehensive ban. We've consistently told them: we need to synthesize what you all have done, but also apply a racial lens to it because it's going to be much harder to convince not just Detroiters but the world that a Black city doesn't need to be surveilled. It's a deeper, larger conversation and it requires a kind of organizing around anti-Black racism that often isn’t comfortable to do even in social justice or liberal spaces. |
What was it like talking to that kid who was your student’s friend? I mean, it’s very painful talking to these kids. Because he was, you know… there is this sense that it’s impossible to imagine a full life in these places. I hope that’s changing quickly in Bulgaria, as it’s changing elsewhere. But even the conversations I had with my very privileged students in Sofia were dispiriting, because there’s such a sense of impossibility about things that queer people in the West take for granted: about the possibility of having a visible life, about the possibility of coming out to your parents and your friends. | As for the kid in the village, he went to university in the West. So it’s nice to think of that story having a much happier trajectory now. When I first spoke to him he was probably fifteen, but he did find a way out. He did find a way to a place where there will be much more possibility for him to live openly. |
2/ In this issue, our contributors talk about distribution in many different senses. These include: the distribution of physical objects, like pieces of US mail and utilities like electricity; the use of algorithms to distribute work and wealth and risk; the distribution of computing power, including to devices not typically thought of as “smart.” Distribution links mundane questions of logistics and lofty questions of justice. How should a society spread around life chances? Who should control information about genes that may partially predict them? In the US, education and technology are dubious fix-alls. Around the world, groups of tech workers are making new kinds of demands on how the power and wealth generated by new technologies should be distributed. We see both the global dispersal of ideas and the importance of local differences. It may look different for gig drivers in Bengaluru or Jakarta than at HQ of Alphabet. | 3/ As we closed the issue, vaccination rates in the US picked up and a container ship, having completed tracing an obscene graffito, then wedged itself in the Suez Canal. This year has been a reminder that, even as many of us have been more isolated than we would have ever thought possible, everything remains connected to everything else. |
Recently there was a huge exposé of the sexism that happens at Riot Games, the company that makes League of Legends. I have a friend who works at Riot who told me things that weren’t in the stories that were published—it’s just been toxic for years and years, and it has only come to light recently, when workers tried to defend themselves publicly and were fired. | This is unfortunately very common. In the game development industry, if you are a woman, or if you are trans or nonbinary or intersex, you've always had to deal with sexism or workplace discrimination, whether in the form of outright harassment or lower pay. I am hard-pressed to think of anyone that I know personally in the industry who hasn't. |
More than 150 years later, black workers in Detroit toiling at what were widely regarded as America's most secure and iconic jobs—the automobile assembly line—called out another form of false innovation. In 1968, radical organizer and editor John Watson decried the prevailing and degrading situation of "speedup, bad working conditions, automation," in which "one black man does the job previously done by three white men." According to Detroit icon and activist Grace Lee Boggs, members of the United Auto Workers, including her husband Jimmy Boggs, used the term "man-o-mation" to describe the dynamic. | As Dan Georgakas and Marvin Surkin make clear in Detroit: I Do Mind Dying, the term caught on, and for good reason: "In 1946, some 550,000 auto workers had produced a little more than three million vehicles, but in 1970 some 750,000 auto workers had produced a little more than eight million vehicles." The rapid pace took a devastating toll on people's health and lives, leaving millions of workers disabled or dead; one study from the period estimated "sixty-five on-the-job deaths per day among autoworkers." Auto industry executives credited the industry's productivity boom to advances in machinery, but the predominantly black workforce knew it was in fact due to old-fashioned exploitation, not automation: heavier workloads and unsafe, unhealthy conditions.
We've definitely seen fund managers who don't understand why we care so much about Asia, because they don't have access to the exchanges and because of this perception that there are so many scams. Our perspective is, you have to care, because it's half the crypto volume. As an example, we know some people who shorted Ripple. And don't get me wrong, I hate Ripple, I think it's a scam. However, I would never short it, because I know it's popular in Korea and Japan, so I know that if I shorted it, there would be a bunch of organic demand in Japan or Korea and I would get liquidated. That's exactly what happened to a fund manager we know, twice actually. | What are some of the more interesting blockchain projects in China that you've come across recently? There are areas that I think are exciting. One is that there are a lot of Chinese people making games on EOS, a delegated proof-of-stake blockchain project (Ed.: For an explanation of building games on the blockchain, see Singularity Hub's "How Blockchain Is Changing Computer Gaming"). EOS originally did a massive ICO and got tons of flak for it. A lot of my hardcore Bitcoiner friends think EOS was a big scam because of the ICO, and because it uses 21 validators to validate transactions rather than a massive number of miners and so it's much more easily censored. (Ed.: In cryptocurrencies such as Bitcoin, the validation of transactions into the canonical blockchain ledger happens through a process of a very large number of distributed "miners" competing to solve a computationally difficult validation algorithm, which imposes a strong barrier against someone who wishes to manipulate the ledger. "Delegated proof-of-stake" is a consensus-based algorithm where people who hold the most tokens vote to nominate a small number of delegates who can validate the network's transactions. Because there are a smaller number of entities who at the end of the day validate the ledger, this algorithm is in theory easier for a nation state adversary to compromise, if the delegate validators end up being centralized in one geographical area.) My take was, well, I don't want to use EOS if my adversary is a government trying to censor my billion-dollar capital flight out of some nasty jurisdiction, but it's fine if all I'm doing is making some game that the Apple app store won't approve. It's not that easy to censor; you still have to go track down not only 21 validators but I think something like 100 back-up validators... It's actually pretty hard. I think for stuff like games this is going to be huge—this thesis was confirmed for me when I started seeing people in China making games like EOSbet, DaFuWeng (Monopoly), Fomo3D. I'm convinced that Fomo3D is made by a Chinese team, I've read their code comments, like it's got to be.
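To make the tradeoff in the editor's note above concrete, here is a deliberately simplified sketch of the delegated proof-of-stake idea. It is not EOS's actual code: the candidate names, stake figures, and delegate count are invented, and real systems add vote weighting, producer schedules, and penalties. The point is only structural: a small, elected set of validators takes turns producing blocks, instead of an open field of miners competing on computational work.

```python
# Toy sketch of delegated proof-of-stake (not EOS's implementation):
# token holders vote for candidates in proportion to their stake, the top
# candidates become delegates, and block production rotates among them.
from collections import Counter

def elect_delegates(votes, num_delegates=21):
    """votes: list of (token_balance, candidate) pairs. Each holder's stake
    counts toward the candidate they back; the top vote-getters win."""
    tally = Counter()
    for balance, candidate in votes:
        tally[candidate] += balance
    return [candidate for candidate, _ in tally.most_common(num_delegates)]

def producer_for_slot(delegates, slot):
    """Blocks are produced round-robin by the elected delegates, rather than
    by whoever solves a proof-of-work puzzle first."""
    return delegates[slot % len(delegates)]

votes = [(500, "alice"), (300, "bob"), (150, "carol"), (900, "dave"), (50, "erin")]
delegates = elect_delegates(votes, num_delegates=3)
print("elected delegates:", delegates)
for slot in range(6):
    print(f"slot {slot}: block produced by {producer_for_slot(delegates, slot)}")
```

With only a small, known set of producers per round, the censorship tradeoff described above follows directly: pressuring twenty-one validators (plus their backups) is a very different problem from pressuring a global pool of anonymous miners.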
I feel like it’s always a question of, what are you optimizing for? There is this fetish of capitalism as supremely rational. And it does optimize for certain things, like technological innovation. But if you think about it from another perspective, it’s also catastrophically irrational, because what could be less rational than wasting the potential of the millions of people whom capitalism exploits, or, worse, excludes by rendering permanently redundant? Capitalism doesn’t optimize for that. | I feel like I encounter this a lot in conversations with people who are in tech—my colleagues—who often have good intentions, but sometimes it’s hard for them to move their frame of mind from the technical to the political, to move from the technical question of how to optimize the process in front of you to the political question of what are you optimizing for. |
Let’s start by talking about your background. How did you get involved in finance? I was always interested in economics and had a quantitative background. Anyone who succeeds academically where I grew up ends up being very quantitatively oriented. After school, as I was trying to find a profession that would be financially rewarding but would also allow me to use what I studied, I started looking at the financial industry. I ended up taking a job on a trading floor in an investment bank. Most large banks have at least one, typically several trading floors. It’s an actual floor, about the size of a football field, filled with traders who do business with large investors looking to trade stocks, bonds, or futures, or to borrow money. The bank makes money by taking a commission, or by “market-making”—intermediating between buyers and sellers, taking some risk with its own money while it waits for the two sides to match up. | When I think of a trading floor, I think of a bunch of guys screaming into the phone, Wolf of Wall Street-style. It’s not so much people yelling into the phones anymore. The trading floor has evolved quite a bit over time. It used to be more about being alive to the transactional flow of global markets. It’s increasingly about the operations that enable that flow, and the intellectual property that allows people to make money off that flow. I liked it. The trading floor is still where a lot of the actual design and transactions of global markets take place. And it’s stimulating. If you want to use your intellectual muscles, you can do so pretty quickly. You’re not just sitting at a desk somewhere out of the way, or trying to pitch corporate titans with some arbitrary analysis to back you up —which can be more of a salesmanship game and less of an intellectual exercise. Anyway, over time I migrated to the investment strategy part of the financial world. I started helping large asset owners—entities like pension funds and sovereign wealth funds—allocate their money to systematic investment programs. That’s where I migrated to because that’s where most of the financial world was migrating to after the 2008 financial crisis, as everyone realized that the old ways of investing were not really doing what they wanted them to do. Portfolios had been too exposed to the same underlying risks. Technology was now enabling investors to understand their risks better, and to take more direct control over their investments. Part of the shift involved removing human decision-making when it wasn’t perceived as adding any value. |
This is the path of retreat from the digital, towards the “authentically human”—an idea that’s constantly invoked by the new techno-moralists but rarely defined, although it’s generally associated with reading more books and having more face-to-face conversations. The other route is to build a better Blob. | Building a Better Blob Data is the new oil, says everyone. The analogy has become something of a cliche, widely deployed in media coverage of the digital economy. But there’s a reason it keeps coming back. It’s a useful comparison—more useful, in fact, than many of the people using it realize. Thinking of data as a resource like oil helps illuminate not only how it functions, but how we might organize it differently. |
These capabilities deliver on key tenets of neoliberal prison reform. Two out of three ex-prisoners are rearrested within three years of release. Education interrupts the cycle. Inmates who take classes are 43 percent less likely to reoffend. Tablets can extend educational offerings to prisoners at “zero marginal cost,” notes David Fathi, director of the ACLU’s National Prison Project. | They can also serve as digital umbilici, nourishing “social capital” shown to fortify inmates during imprisonment. Studies in Florida and Minnesota linked staying connected to loved ones to reduced recidivism. The findings relate to in-person visits, but prisons are frequently remote from inmates’ homes—hard for friends and family, many of whom are working poor, to reach. Videoconferencing—“video visitation” in correctional parlance—is a boon, allowing prisoners to see loved ones as they talk. |
From Pac-Man to eSports My gaming genealogy isn’t so unusual. In countless family photos, I am holding a Game Boy. My cartridges of Pokemon, The Legend of Zelda, Metroid, Castlevania, Sonic, and Kirby were constant childhood companions. I was nine when I swore to never return to Europe after losing my Game Boy and a bunch of games on a tour bus in Paris. | Aside from occasional multiplayer rounds on GoldenEye 007 with my brother, being a “gamer” (as I understood it then) was a solitary activity. This changed around the time I wrote my parents a two-page single-spaced letter on why they should pay $5 a month to let me have a premium RuneScape account. We had just moved (again) and my new friends in fifth grade all played together every day after school. Playing games was something I had always loved, but the stakes were higher now that I had a new social scene to fit into. Many game scholars would argue that competitive video gaming—or “eSports,” as it’s often called today—has been around since the beginning of video games. In the 1980s, players used to jockey for the highest score on arcade games like Pac-Man. In the 1990s, local tournaments were organized for sports, racing, and fighting games. By the end of the decade, these had become big events, often backed by corporate sponsors. The 2000s brought the growing popularity of eSports to new heights with major annual tournaments like MLG. And eSports became a global phenomenon, with many competitive communities throughout Europe and Asia. South Korea in particular emerged as an epicenter: in 2000, the South Korean government created the Korean e-Sports Association to regulate and encourage the industry. |
Ushahidi went on to become a great success story for Kenyan tech and an inspiration to entrepreneurs across the continent. Its tools for incident reporting have been deployed around the world, often for the purposes of election monitoring, crisis response, and human rights reporting. Soon after launching the site, the Ushahidi crew started a tech hub in Nairobi called iHub. Within a few years, every major African city seemed to have a tech hub of its own, if not several. These ventures in turn built on the success of M-Pesa, a mobile payments system launched in 2007 by the Kenyan telecom Safaricom. M-Pesa revolutionized African banking and, like Ushahidi, quickly went global. The Kenyan tech scene, soon dubbed the "Silicon Savannah," had arrived. | Leapfrogging Politics The story of Ushahidi illustrates an important point about tech in Kenya, and in much of Africa. Tech is a metaphor as well as a business. It's a metaphor for a future that is not only prosperous, but also free from politics. African tech promoters often talk about "leapfrogging" to accelerate development: for example, by skipping fixed phone lines and going straight to mobile. The biggest leapfrog of all would be to skip over corrupt states and parasitical elites to a post-political internet where Africa's large and underemployed youth population might have a shot at escaping poverty. But politics has proved hard to avoid. Even as a desire to transcend politics fuels the fascination with tech, politics finds ways to reassert itself. "Technology, especially digital technology, and politics are inseparable," Nanjala Nyabola tells me. A leading Kenyan public intellectual and the author of Digital Democracy, Analogue Politics: How the Internet Era Is Transforming Kenya, Nyabola says that tech is often seen as something that "can cure the problems of Kenyan politics." It's a promise that tech can't possibly keep, however: "Unless there is a political change, tech won't fix it. You'll just end up chasing your tail." This dynamic is far older than the Silicon Savannah. Indeed, Kenyan tech has long been shaped by its complex relationship with Kenyan politics. In early 1994, three young Kenyans—two at MIT, one at Harvard—started Africa Online, which would become the leading internet service provider (ISP) on the continent. Initially, the Kenyan government opposed the internet altogether. The state-owned telecom monopoly took out a full-page advertisement in 1995 warning entrepreneurs that offering internet access amounted to resale of services and was therefore illegal. It targeted the internet partly to protect its turf, partly to control political speech in an unstable country, and partly because it could hardly keep up with the demand for fixed voice lines, let alone internet service.
I actually don't know if the steps that have been taken around COVID have made things worse, or whether they have improved the warehouses. Internally, people say, "Oh, we're probably better than our competitors, or other warehousing and logistics companies." But I don't know if that's the case. There were other challenges at the beginning of the pandemic too, right—with supply chains for instance? Supply chains all across Amazon were definitely impacted. It was difficult to get hand sanitizer. It was difficult to get cardboard boxes. We got lucky in the sense that the beginning of the pandemic overlapped with the Chinese New Year. So we had already accounted for some slowdown, because we expected the Chinese New Year to impact timetables anyway. Overall, though, it seems like the pandemic compounded all of Amazon's advantages and significantly reduced the impact of all of Amazon's weaknesses. How so? The crisis advantaged large purchasers. The more scale you have, the more buying power you have, right? If you're a factory, the way that your distribution of customers is set up is that you have one or two customers that make up 80 percent of your business and then a ton of customers who make up 20 percent of your business. So if your production is down 30 or 40 percent, you say, "Okay, I'm going to have to cut out all of the small vendors and only focus on the big partners." That's what happened. Amazon was able to get preferential distribution of shipments when smaller companies were totally sold out. | And there's been so much e-commerce growth in general during the pandemic. The question on everybody's minds in retail is: can Walmart and Target use their local distribution infrastructure to get packages to people's doors faster than Amazon can? Walmart and Target's only advantage is their physical stores. When the stores are closed, they can be used as distribution hubs. You can pick things up close to where people's homes are and deliver them. Buying Whole Foods gave Amazon the opportunity to reach, you know, 80 percent of the 1 percent. But vast swaths of America aren't reachable by Whole Foods. The rumors that I hear, both internal and external, are that we're very seriously interested in acquiring post office real estate. The reason why the post office is valuable to privatize is because of their real estate holdings. They have great real estate in every downtown of every city in the United States. Amazon may be interested in buying all of the post office locations, and we have the cash to do it. So why not? The other week we announced we're hiring one hundred thousand more workers again. We're expanding dramatically across the board, in part-time and full-time, at corporate and retail and fulfillment and logistics and devices and distribution and all the various pies we have our fingers in. How about AWS? Are you growing the same way? Some of our customers obviously saw a dramatic decline in their income. Some large customers in the hospitality industry and the retail industry negotiated substantially discounted rates or got large one-time credits. But in general, COVID did convince a lot of companies to accelerate their cloud migration, because if you're an organization that has your own data center, chances are that you now have to implement a bunch of safety guidelines and restrictions. Amazon just operates at such a scale that we can do it better and more efficiently.
For instance, a lot of security companies throughout the industry were impacted because suddenly their secure facility had entry rules. Only one person could be in there at a time. But in the security technology industry there are a lot of processes that need two or more people to be physically present. So that made it impossible to use a lot of the secure facilities because the guidelines were in conflict. It was kind of a Freaky Friday moment where everybody realized that, in the context of a pandemic or a natural disaster, these procedures they created to make sure that their facilities were safe were actually preventing them from following security best practices. |
Winners and Losers One wonders if it’s possible to carve out a third way between the purely intuitive and the mechanically standardized. Atul Gawande has written extensively about this possibility, depicting a meeting of minds between autonomous doctors and health systems designers—and he manages to do so without making it seem terrifying or fantastical. In this world, technologies might seek to complement and enhance, rather than replace, the physician’s ability to incorporate research into practice. | Natural language processing and dictation will allow physicians to use any words they like while recording notes into an EMR, as opposed to drop-down menus and pick-lists. Artificial intelligences like IBM’s Watson will comb through research on behalf of the physician and aid in clinical decision-making. The doctor’s lounge, an increasingly rare phenomenon, is a basic form of technology that allows physicians to connect and share information. Not all innovations need to be bleeding edge. |
And, of course, joining a movement to fight persecution is the very meaning of political subjectification. Times of political revolt have long been attended by claims about the revolutionary force of challenging traditional sexual prescriptions. And little wonder: sex is a discourse that plays a major role in shaping what kinds of selves get to exist and how they get to exist together. This is the stuff of politics. | The problem with my ex’s position was modal. He viewed certain sex—certainly not all sex—as a necessary rite of passage, without which no appropriate radicalization was possible. His belief touched on the religious—a faith that certain sex acts between certain bodies carried a radically transformative quality a priori. For a man who claimed to be a Foucault scholar, it was a baffling assertion of normative moral facts. But we met when I was very young. He was twelve years my senior, and it took me some time to weed out the hypocrisy and dogmatism from what, if anything, was righteous, or even sexy. |
The spread of knowledge about disability news and affirmations of disability identity via social media have produced profound shifts in my life. I wish I had the mental and physical support as a teenager in the 1990s that I have on the web now. I consider so many people my family that I have never met in person but have known for years. I've found intellectual stimulation and resources, as well as validation and camaraderie through social media. My chosen family has swelled in the past decade. | Could you give us some examples from your own life? I would be remiss if I did not tell the story of meeting my wife on Craigslist in 2009. It is so humorous that we met each other on the bathroom wall of the internet. I was seeking intellectual intercourse, and her friend found the ad and made her write me.
*** I check the back while Aluna’s fast asleep. The trunk, too. Moving as quietly as possible through years and years of our shit, all of it bursting at the seams and threatening to fall across the parking lot. Every exhale is a puff of mist, and I blink to clear my eyes. Journals, clothes, random pieces of silverware, actual physical books, toothpaste, blankets—I bring one out, drape it over Aluna’s body in the passenger seat, then go back to my search. | I find it wedged on the trunk floor, beneath the corner of a box. It must’ve fallen out of the case. Unmarred, a thin silver ring. After scooting back in the driver’s seat, I hover with the ring tucked in my palm. It grows warm there, in the safety of my hand. I don’t ever want to let it go until Aluna’s ready to take it. So I’ll wait till next morning, and see what she says. |
Human on the back end—so does that mean it’s going overseas? In the Philippines or in the Midwest, somebody is tagging parts of the speech, and correcting things, and actually parsing whether I misspelled something or meant to write something else. It’s becoming training data for the machine, and eventually it will get incorporated. | It’s the MVP. Yeah, it is just so much easier to build a web app that connects someone in the Philippines to a series of database questions, and has them do the work, than it is to build an AI than can handle arbitrary responses to a calendar invite. So you just tell your venture capitalists that you’re working on AI, but that some of it still needs to be labeled by hand. That just seems way easier. That’s the business I would start if I were doing that. |
In 1830, Harvard College sued a man named Francis Amory. Amory was accused of mismanaging funds that were intended to be donated to the university once a donor and his wife had passed away. Amory put the money into riskier investments like insurance, manufacturing, and banking stocks—and then lost half the money. Harvard sued and the judge sided with the college against Amory, establishing the “prudent man rule.” The prudent man rule required that investment managers exercise caution and care, avoid speculation, and behave as if the money they are investing were their own. In the 1960s and early 1970s, as the first venture capital firms were getting established, the prudent man rule was interpreted as a restriction on the kinds of risks that early-stage investors could take. This norm effectively prevented venture firms from raising capital from large institutional investors, which capped the possible size of the industry. | If venture capital was going to grow, it would need a different interpretation of prudent investing. It would need to change how investors managed risk. In the late 1970s, venture capital scored a major victory. Congress was crafting a comprehensive set of rules governing private pension management to ensure that workers could feel secure that their retirement savings were being managed responsibly. They applied the prudent man rule to pension investments, which the investment management industry interpreted as a ban on riskier investments such as venture capital. |
Crucially, gatekeeping power subordinates two kinds of users on either end of the “gate.” Content producers fear hidden or arbitrary changes to the algorithms for Google Search or the Facebook News Feed, whose mechanics can make the difference between the survival and destruction of media content producers. Meanwhile, end users unwittingly face an informational environment that is increasingly the product of these algorithms—which are optimized not to provide accuracy but to maximize user attention spent on the site. The result is a built-in incentive for platforms like Facebook or YouTube to feed users more content that confirms preexisting biases and provide more sensational versions of those biases, exacerbating the fragmentation of the public sphere into different “filter bubbles.” These platforms’ gatekeeping decisions have huge social and political consequences. While the United States is only now grappling with concerns about online speech and the problems of polarization, radicalization, and misinformation, studies confirm that subtle changes—how Google ranks search results for candidates prior to an election, for instance, or the ways in which Facebook suggests to some users rather than others that they vote on Election Day—can produce significant changes in voting behavior, large enough to swing many elections. | A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions. |
There's a degree to which you have to make tradeoffs to create distributed protocols that work, but you need to prioritize certain things from the get-go to make sure that they work for the types of people that you're building for. Again, a crucial part of decentralization is questioning the universality of technology and technological solutions. What works in Bangalore probably won't work for a lot of other places. But if it's free and open source, people can adapt it to their own communities. This assumption that technologies should be universal, that everyone should be on one platform, is dangerously naive. It flattens the nuances of language and culture and the ways different people engage with information and knowledge sharing. It's sad that there isn't enough money to support all the experimentation we need. Obviously, some types of experimentation get injected with an insane amount of money—think Zoom or Facebook, or new platforms like Clubhouse. It's like they're injected with steroids. But there are so many ways to communicate and share information. There's not enough creativity around that. Due to the nature of capitalism and legacies of colonialism and white supremacy, only certain types of people have the capital and resources to support certain types of innovation. Investors and backers tend to support technologies that frame problems in ways that they can understand or relate to. Funding shapes priorities, and priorities shape technologies—this is one of the ways ideology gets embedded into the tools we use. Engage Hypercore! It seems like with your work with Distributed Press, you're starting with the core principles first, and then figuring out what is useful to build from there. Could you talk a bit about the formation of the project and its goals—what would success look like for Distributed Press? The idea originally came about from discussions between Benedict Lau and me; we both worked on DWeb Camp. He worked on Toronto Mesh, a community mesh network in Toronto, and I met him through my involvement with a community mesh network in Oakland. When we were publishing articles about decentralization ahead of the Camp, it felt wrong to have to publish media about the DWeb on the corporatized, centralized web. We wanted to be able to publish to the DWeb protocols we were talking about, and explore other ways of collective publishing in general. | When we started to discuss what to build, we didn't want to start from a place of assuming we knew what we were talking about. So we did an ecosystem review, interviewing around a dozen people in the IndieWeb and DWeb movements, the journalist/crypto scene, and researchers who study rightwing extremism on social media. We wanted to get the lay of the land and talk to people about what's actually missing. What came out of this research phase was this idea to build a tool that allowed people to easily publish to different protocols, including the World Wide Web, and to have a way to sustainably monetize their work.
Wow, two years is a long time. Eventually I discovered the United States Digital Service (USDS) on Twitter. I applied to USDS and I went through their very tedious six-interview process. Apparently I passed that. But, because I’m a former officer in a foreign army and they do most of their work with the Department of Veterans Affairs and the Department of Defense, they felt I would never pass their security checks and decided not to hire me. | But there’s a circle of government technology organizations where they all know each other and work together—USDS, the General Services Administration (GSA), Technology Transformation Services (TTS) which includes 18F, etc. They forwarded my resumé to them. I interviewed for GSA and then waited three months to hear they wanted to hire me. And then, three months after that, they told me I’d need to start the background check process. It took something like eight months altogether from the time I interviewed for GSA to the time I started to work for them. |
To illustrate Mises's point, let's take a simple example: the manufacture of a pencil. The manager of a pencil-making factory has to make many production decisions, because there are many ways to make a pencil out of its component parts. How does a pencil maker decide how to produce his "final good," the pencil, out of all the possible "intermediate goods," the various types of graphite, wood, paint, and other things that go into making it? In a capitalist society, he begins by checking the price catalog, where he discovers that graphite A costs 35 cents per pound, while graphite B costs 37 cents. If either works, his choice is clear. This manager can perform the same price test for all the relevant inputs, in order to arrive, quickly and accurately, at the most rational way to make a pencil. He does not need to understand how all the activities of society add up to an overall economy. Prices allow the pencil makers to quickly set aside numerous procedures for making pencils that would result in functioning pencils, but at the cost of squandering natural or labor resources better employed elsewhere. If given tons of the finest quality Cocobolo or Osage Orange lumber, the pencil makers could undoubtedly make good pencils. But this would be a waste if some other tree, like the humble cedar, provided lumber that worked just as well. Of course, the prices that pencil makers use to make production decisions are not just random numbers. They are expressions of a living market society, characterized by decentralized decision-making, involving large numbers of producers and consumers. Markets place pressure on all producers to get prices right. If it proves possible, for example, to make pencils more cheaply without sacrificing quality by using a new technique, the firm that does so will earn a sizable profit. New information about pencil production possibilities will show up in the system as a lower pencil price. | Each producer can make rational decisions about what and how to produce, only because a struggle for market supremacy forces producers to maximize their revenues and minimize their costs. All of these market-dependent producers absorb information to the best of their abilities, make decisions, and take risks in search of new production possibilities and the corresponding monetary rewards. Socialist planners couldn't possibly reproduce such a complex system, Mises believed, because they would never have more information than market participants mediated through the price mechanism. Ultimately, prices tell producers which production possibilities have any chance of turning a profit. Without prices, Mises argued, the rational allocation of assets becomes impossible. Fatal Errors What's striking about Mises's description of capitalism is that it is already highly algorithmic. In his account, the managers of the pencil factory behave like a computer program. They collect price information about intermediate inputs and then follow a simple rule: choose the cheapest option for each input that does not lengthen production time or lead to an unacceptable reduction in demand. Many socialists responded to Mises's challenge by accepting his basic premise and then trying to write their own algorithm. In other words, they wanted to show that planners could create a substitute for the price system that could generate enough information to arrive at the correct production decisions for a socialist society.
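The "simple rule" attributed to the pencil maker above really can be written out in a few lines. The sketch below is only an illustration: the input names and prices are invented, and the set of acceptable options stands in for the quality and production-time tests the manager is assumed to perform.

```python
# Minimal sketch of the price-test rule described above: for each intermediate
# input, consult the price catalog and pick the cheapest acceptable option.
# All goods and prices here are invented for illustration.
price_catalog = {
    "graphite": {"graphite A": 0.35, "graphite B": 0.37},
    "wood":     {"cocobolo": 4.10, "osage orange": 3.80, "cedar": 0.60},
    "paint":    {"lacquer": 0.12, "enamel": 0.15},
}

# Options that yield a functioning pencil (a stand-in for the quality and
# production-time checks the manager is assumed to be able to make).
acceptable = {
    "graphite": {"graphite A", "graphite B"},
    "wood":     {"cocobolo", "osage orange", "cedar"},
    "paint":    {"lacquer", "enamel"},
}

def choose_inputs(catalog, acceptable):
    """Return the cheapest acceptable option for each input and the unit cost."""
    choices = {}
    for input_name, options in catalog.items():
        usable = {name: price for name, price in options.items()
                  if name in acceptable[input_name]}
        choices[input_name] = min(usable, key=usable.get)
    total = sum(catalog[name][choice] for name, choice in choices.items())
    return choices, total

choices, total = choose_inputs(price_catalog, acceptable)
print(choices)                 # cedar beats the fine hardwoods on price alone
print(f"unit cost: ${total:.2f}")
```

Cedar wins without anyone needing to know why the finer hardwoods are better employed elsewhere; the price does the filtering. The socialist responses discussed next set out to reproduce that filtering effect without market prices to consult.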
The Polish economist Oskar Lange and the Russian-British economist Abba Lerner were the first to develop this idea. Their proposals, worked out over the course of the 1930s and 1940s, involved socialist planners “feeling” their way towards the right prices through trial and error. For example, planners might set the price of an intermediate good required to make a pencil, and then adjust that price as necessary, until the supply of the final good matched consumer demand. A series of approximations would get closer and closer to the true result, much like a computer calculating pi through a sequence of slight additions or subtractions. |
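Here is a minimal sketch of that trial-and-error procedure, under toy assumptions: linear supply and demand curves, a fixed adjustment step, and a planner who can directly observe shortages and gluts. None of the numbers come from Lange or Lerner; the sketch only illustrates the successive-approximation logic they had in mind.

```python
# Toy sketch of Lange-Lerner style "trial and error": post a price, observe
# excess demand, and nudge the price until supply and demand roughly match.
# The linear curves and step size are invented for illustration.
def demand(price):
    return max(0.0, 100.0 - 8.0 * price)   # buyers want less as the price rises

def supply(price):
    return max(0.0, 20.0 + 4.0 * price)    # producers offer more as the price rises

def grope_toward_equilibrium(price=1.0, step=0.05, rounds=500, tolerance=0.5):
    for _ in range(rounds):
        excess = demand(price) - supply(price)
        if abs(excess) < tolerance:
            break
        price += step if excess > 0 else -step  # raise on shortage, cut on glut
    return price

p = grope_toward_equilibrium()
print(f"approximate clearing price: {p:.2f}")
print(f"demand {demand(p):.1f} vs. supply {supply(p):.1f}")
```

The posted price rises whenever demand outruns supply and falls when goods pile up, edging toward the market-clearing level through exactly the kind of successive approximation described above.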
First there are the text messages. Impersonal, incessant, and devoid of context, they reveal few hints of their purpose. The language is so vague you’d be forgiven for thinking it was spam. “Message from the Probe Group regarding an urgent matter. Please call us.” Then, a deluge of phone calls—up to ten times a day, often after-hours—from an unknown mobile number. Chances are you’re still ignoring them. | Should you choose to return the missed calls, you’ll be greeted by an automated voice and placed in a queue. The identity of your caller remains a mystery. When a human operator at an overseas call center at last assumes the reins—whose identity and place of employment, by the way, remains unknown—you’re asked to disclose your date of birth and home address. |
This friend told me about this thing called Bitcoin that he was really into. He described it as a form of digital money that you could actually hold, unlike PayPal balances, which are really just notional numbers in a database and at the end of the day are completely under the control of PayPal. I went on to read the Satoshi Nakamoto white paper and was instantly fascinated, especially because back then there was this narrative that it was going to be an uncensorable payment system. That just seemed really exciting. So I bought a very small amount of Bitcoin with cash from some guy at a cafe in SoMa. | I didn't pay much more attention to the space until about 2015 or 2016, when there started to be a lot of interesting non-Bitcoin blockchain projects. Before that, it was basically Bitcoin and then a ton of scams. There was Megacoin and Feathercoin and all these things were just like... obvious scams. It didn't seem like it was going to be an industry so much as a ton of people trying to profit off of it by running their own hustle. But then it became apparent that there were some other uses for blockchain besides just Bitcoin. I think Bitcoin's the biggest, most interesting use-case, but there turned out to be a lot of other interesting smaller ones.
But each of these communities has critical things to contribute. Climate activists can help us avoid the trap of “platform determinism”—that is, the risk of fetishizing the platforms as mythically powerful actors, instead of centering the choices made by the humans who design the platforms. Such activists also bring expertise in movement building: they know how to put pressure on investors and companies, and how to form relationships with policymakers. On the other hand, those who are working on the politics of data, whether from socialist, decolonial, or Indigenous perspectives, have valuable experience in identifying the problems with data appropriation, access, and use, as well as in creating just data frameworks. And technologists can put their knowledge of quantification and design to work building platforms that are genuinely usable and inspiring. The knowledge problem of net zero is difficult but not insurmountable. Solving it the right way will require a group effort. We must build data infrastructures that embody multiple ways of knowing and understanding our world, and that help us advance both ecological and social ends, before corporations conquer this space for themselves. | I wonder what’s taken me so long to pop the question, and why this ritual that’s been performed countless times by so many before me bears down on me like I’m the first woman in history to put a knee to the floor and offer the ring and say the words. She’ll say yes. We talked about it. When we lay beside each other, Aluna gains the power to ease the edge around momentous things, and it’s all in how she finds the right tenor, right whisper, the right circles of pressure to trace along the dips of my neck. I’ve brought all sorts of crises to her before sleep. VR recordings of tragedy filmed that very morning, from hundreds of miles away and through the dissociative lens of a drone’s 360 degree camera. Roaming tent cities trudging across the Sahara that siphon power from abandoned solar panel farms. Ocean slums sprawling off the coasts of Italy, Spain, France, sinking as their fate is decided in boardrooms graced with AC. Sea spray is everywhere it shouldn’t be, and scatters the footage into clouds of discordant pixels—that’s when Aluna pauses our replay, blinks our retinas clear. While the bedroom is at its darkest, we shift to face each other and she tells me what she thinks comes next. Never lessening the blows, but making it all feel approachable, which is exactly how she responded when I asked if we might actually spend the rest of our lives together. And she said yes. |
In the financial industry, investors want firms that use big data and machine learning and artificial intelligence—but do those new tools actually generate better results? That’s a good question. The best way to explore it might be to talk about the role of data. There’s a lot of excitement in the financial industry about the amount of new data that’s being made available. Think about what kind of data might be useful for predicting the price of an oil future. It might be a piece of political news, public announcements from regulators, satellite images of oil refineries to calculate oil reserves. There are tons of different kinds of data out there—pretty much anything you can think of. | Along with new forms of data, there are also new forms of data analysis. The early versions of complex data analysis included looking at the financial statements of publicly traded companies. But now you can parse through the data in those statements in more interesting ways. Back in the day, you might care about how much debt the company has or what its earnings are relative to its price, and you might compare those figures to the broader market. But you were ultimately limited by your capacity to source and process this data. Now you can analyze more variables more systematically across thousands of stocks. You can also do more exotic things like use natural language processing techniques to figure out what the company is saying in its statement that isn’t reflected in its numbers. How did the commentary change from previous earnings reports? What is the tone of the words they use to describe the underlying business? How does this tone compare to words used by its competitors? Even though it’s the same data you had access to before, you have more processing power and better techniques to understand that data. The challenge is that not all of these sources of data and ways to analyze them will be useful for predicting the prices of financial instruments. Many of the new data sets, like satellite imagery, tend to be quite expensive. And they may not add any information more useful than what is already available to market participants from the vast streams of data on prices, companies, employees, and so on. We’re still in the phase where we’re trying to figure out what to do with all the data that’s coming in. And one of the answers might be that most of it is simply not that valuable. |
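As a rough illustration of the tone-comparison idea mentioned above, here is a toy sketch. The word lists and sample sentences are invented, and real quantitative shops rely on far larger finance-specific dictionaries or full language models; the sketch only shows the shape of the computation: score each statement, then compare quarters.

```python
# Toy sketch of comparing the "tone" of two earnings statements with a small
# word list. The lists and sentences are invented; production systems use
# much larger finance-specific dictionaries or trained language models.
POSITIVE = {"growth", "strong", "improved", "record", "exceeded"}
NEGATIVE = {"decline", "weak", "impairment", "headwinds", "uncertainty"}

def tone(text):
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(1, pos + neg)   # ranges from -1 to +1

previous = "Record growth and strong margins exceeded expectations."
current = "Revenue decline amid macro headwinds and continued uncertainty."

print(f"previous quarter tone: {tone(previous):+.2f}")
print(f"current quarter tone:  {tone(current):+.2f}")
print(f"change in tone:        {tone(current) - tone(previous):+.2f}")
```

The questions raised above, how the commentary changed from previous reports and how the tone compares to competitors, are comparisons of exactly this kind, just at much larger scale and with better dictionaries.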
In other words, the tube sites make money the same way that Facebook does. And the fact that the same companies also own many of the big studios means that they can use the data they collect not only to sell targeted ads but to make their videos even more engaging so that users spend even more time watching them, thus generating even more data. They are creating a vertically integrated data porn empire. | Don’t Fight the Data While a lot of people (most likely you and everyone you know) are consumers of internet porn (i.e., they watch it but don’t pay for it), a tiny fraction of those people are customers. Customers pay for porn, typically by clicking an ad on a tube site, going to a specific content site (often owned by MindGeek), and entering their credit card information. |
The Flexner Report recommended that medical education develop an evidence-based curriculum. Under its influence, medicine was subjected to the rigors of peer review and the scientific method for the first time. Residency programs were established, uniting the university and the hospital, and placing apprenticeship within the academy. Medical teachers were expected to be proponents of the latest and most credible research. State licensure was tied to education, introducing some semblance of standards. | The recommendations in the Flexner Report also formed the basis of what we today understand as the social contract between the medical profession and the people whom it serves. Patients are entitled to competence, altruism, morality, integrity, accountability, transparency, objectivity, and promotion of the public good. In return, physicians are entitled to trust, autonomy, self-regulation, a funded healthcare system, inclusion in public policy, monopoly, and prestige. |
Don’t Leave Me Whether or not we acknowledge it, our phones and laptops have access to us that’s just as close and unfiltered as a lover’s. Closer, even. What our relationships to them enact, and perhaps therefore amount to, is an intimacy whose loss would leave us feeling humiliatingly, and comically, abandoned and betrayed. These films put the affect back into the otherwise merely technical description of our devices as forms of enhanced connectivity. | Most eerily perhaps, they also propose that fears of abandonment and betrayal are inevitable in our current technological context: not because our computers are less complicated than us, but because their networks of images and sounds so greatly exceed our cognitive capacities and individual contributions. The internet will always know us better than we know the internet. It will also never depend as much on our individual existence as we do on its presence, even though—when we sit down to open a browser—it is so extraordinarily responsive to our every half-typed wish. We might know this in the abstract, but we will still mistake our iPhones for our lovers as long as we rely on them the way we do—even if we keep insisting that they’re not our type. |
If your vision of the progress of life includes a long hiatus for your twenties, that's great for tech firms. If you stay all night at Google, that's great for Google. They can bring you the barber. They can bring you the restaurant. You can have your love life at the firm. Have multiple partners, they don't care. As long as you are super flexible and committed to the firm. | Because you mentioned age discrimination, I wonder if you could speak to the prevalence of sexism and sexual harassment in the tech industry. There's been a lot of media coverage about a spate of recent scandals—but sexism is obviously something that's been a core feature of Silicon Valley for a while.
Organizing the Lab Rats The coalition decided to focus on three goals. Articulating these was crucial to keeping the various political and ideological factions within the coalition from splintering over other issues. First, it called for an immediate moratorium on streetlight acquisition, installation, and operation. This would end the immediate threats that the streetlights posed while organizers lobbied for more radical changes to public policy. | Those changes to public policy were the coalition’s second goal. Organizers sought public participation in the creation of legally enforceable policies over all surveillance technologies used by the city, not just the streetlights. A wide range of technologies beyond the streetlights—known and unknown—made up the surveillance dragnet of San Diego. Worse, surveillance tech could come to the city through donations to the Police Officer Association or through free trials like those offered by facial recognition company Clearview AI. This meant that the City Council did not always have to approve the technologies and the public had no way to see what was in use, unless someone peppered city departments with scattershot public records requests. Council members had not even discussed the Smart Streetlights Program publicly before they signed off on it; that was two years before communities realized a mass surveillance technology was even on the table. Communities needed a legal infrastructure that would alert them to surveillance technologies before they were approved. Rather than playing whack-a-mole to stop individual technologies, the coalition sought transparency, oversight, and City Council authority over all city surveillance technology, existing and future. |
The law says that federal contractors can't manage contractors, so a federal employee has to be the contracting officer representative who manages that contract. So you have people who used to be technologists or system administrators, but now they are federal employees in middle management, and as a result all they do is manage those acquisition contracts. We've seen this pattern a lot. How does that impact the maintainability of software? Theoretically, it should improve it because every single person is replaceable. Practically, it presents interesting challenges. | For example, VB6 was decommissioned in 2012, I think, but there are many VB6 systems across government because VB6 presents a fairly low barrier to entry for development. There's a whole bunch of these lying around. Say I have a contract with the vendor and that contract says I need VB6 experts to manage these VB6 systems. Awesome. The contractor gets to move those experts around, and they're supposed to be interchangeable. But what you learn from managing technology as opposed to doing technology is that knowledge management is really hard to do. Organizational knowledge dissipates even if we get a new VB6 expert, because that expert has never seen this system and the person who was an expert in this system gets moved to another project.
There are solutions, though, I should note. Give us some optimism. So let’s be positive for a moment. It’s easy to point fingers at all of the terrible stuff that happens with capitalism, and we can get stuck in that kind of negative feedback loop really easily. We have been thinking about different types of solutions. One of them would be for the example of 23andMe. For the people who gave you their genomic information for you to create this company, and you’re selling that information to Big Pharma, and then they’re developing drugs and selling it back to those same people for money—why not give them equity? Why not give them a stake? I feel like that’s a more sustainable model, financially. And would be a new cool kind of stockholder, shareholder-based benefit-sharing model. That’s the way we should be thinking about circular economies. What is just? And when we say equitable and you say equity, what do you mean? When I say equity, I’m talking about financial institutions and structures that allow us to move past the dollar sign and buy back our land. Something like stock shares in 23andMe would enable that, or trusts where that money can go to avoid corruption and cronyism. In one of your papers, you talk about the “illusion of inclusion” and the NIH “All of Us” project. One of the common arguments about data is that if data is open instead of proprietary, it will benefit the public instead of just companies. But, in that paper, you talk about how we need to rethink what public benefit really means. Could you elaborate on that idea? First, I would say, whose benefit? Whose greater good? I think that’s the bigger question. If we look at that historically, we know very well whose greater good we’re prioritizing over others, and we know that that’s quite hierarchical, right? Couching it historically, what does it mean when the federal government uses your tax dollars to pay for a large-scale initiative to sequence one million people’s genomes in America, and that data is open? Does that mean that you, as an average US citizen, are going to benefit directly from that? You know, I think there are probably better uses of taxpayer dollars, maybe breaking up this multibillion dollar project into smaller grants and serving other basic scientific questions. | But, in the meantime, the reason why this is problematic is because once that data hits the open market, it gets aggregated by these companies like Regeneron and others and is used in the development of pharmaceutical drugs. Meanwhile, the communities who graciously contributed their genomes to this, based on the false pretense of this having an impact on their health, will basically receive nothing. Think about the history of the Human Genome Project from 2001. If we think about the actual impact of that project on health, it’s been kind of negligible with respect to common complex disease. What do I mean? The number one and number two cause of death in America are heart disease and cancer. If you look at the rates of the number of people in America that have those disorders, it’s pretty consistent. And because that’s the number one and number two cause of death, we have to look at it and say, “Have large scale genome sequencing projects changed these death rates?” Common complex diseases are fucking complex. There are multiple genes at play with multiple mutations and many, many other things. And there are also lifestyle choices and stress and sleep and a number of other things that are independent of the innateness of these conditions. 
But if I keep selling you innateness, and keep trying to say we need to sequence more people’s genomes because it will allow us to reduce the widening gap in health disparities when we know that there are better uses of federal money… I don’t know. There’s a hundred different things that we could be using to do that. That’s not to say that genomic technology hasn’t resulted in some pretty incredible stuff—look at this Moderna vaccine. It’s pretty remarkable, but it’s a targeted thing. It is a piece of drug development. It is innovative, but it has a particular goal. With respect to common complex disease, I would say that it’s been almost an abject failure. For things like Mendelian disease, though, or identifying the cause of Kabuki syndrome and Miller syndrome and these other really, really rare things, genomic technology has been more successful. |
We also created a lookup tool and pledge map back in 2014. You can type in an address, see if there’s been an eviction there, and then pledge to not rent from that landlord. The map was pulling in public eviction data from the San Francisco Rent Board. The obvious next step was to connect other data to it, like parcel data and corporate ownership data. That turned out to be a lot harder than we realized, so it’s taken us some time. But that’s what the EvictorBook website does: it brings together a lot of those tools that we’ve already been using for years and makes it easier to see the landlords, LLC networks, and eviction histories of different buildings in San Francisco. You can enter an address or a landlord’s name, and we’ll show you a profile page with evictions connected to that entity. | Search EvictorBook by entering an address, landlord, or neighborhood name. How did you all build it? Erin: The work began in early 2019 in collaboration with other member organizations of the San Francisco Anti-Displacement Coalition (SFADC), and in collaboration with the Mapping Action Collective (MAC) in Portland, Oregon. Since then, we’ve done some workshops with different tenant groups to figure out what needs and questions they had given the new Covid conditions, and we’ve been really lucky to get new frontend people involved in development work. Right now, we’re making the site more user-friendly and doing more user testing in LA and Oakland. |
The scales possible with exponential growth are as incomprehensible as they are impossible. E. coli's potential for exponential doubling is realized only in the highly controlled environment of a researcher's test tube, where food is abundant and no other species are in the way. Even then, the time that bacteria can be expected to grow exponentially is inevitably limited—growth slows once nutrients start running low or the cells are too crowded to keep dividing further. | Biological growth scales to fit its context. There are no gas giants full of identical E. coli. More than one hundred years ago, D'Arcy Wentworth Thompson published On Growth and Form, a treatise on the mathematics of biological growth. In it, he summarizes this central maxim for biological scaling: "The effect of scale depends not on a thing in itself but in relation to its whole environment or milieu; it is in conformity with the thing's 'place in Nature,' its field of action and reaction in the Universe." Biology's ability to grow in relation to its environment—to grow ecologically rather than exponentially—is at the heart of what inspires biologists, engineers, and designers to work with organisms to build a new kind of technology. What if our technologies could grow and heal like living things? What if concrete could be set by the metabolism of microbes, any cracks repaired in situ? What if factories were replaced with farms, growing new things that could be recycled back into the soil? What if, as MIT Media Lab director Joi Ito has proposed and further developed in a recent manifesto, "the role of science and technology [in the next hundred years] will be about becoming part of nature rather than trying to control it"? People don't multiply like E. coli, but put a few people together and you'll quickly end up with more. Over the scale of millennia we've ended up with 7,589,052,176—the mind-boggling number of people on earth today, whose ability to satisfy basic needs for shelter, food, and warmth is always dependent on the availability and distribution of resources.
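The contrast between unchecked doubling and growth that slows as resources run out can be made concrete with a toy model. The doubling schedule and the carrying capacity below are invented round numbers rather than measured E. coli data; the logistic update is simply a standard way of expressing the point made above, that growth slows as the population approaches what its environment can support.

```python
# Toy comparison of unchecked exponential doubling with resource-limited
# (logistic) growth. The growth rate and carrying capacity are invented
# round numbers, not measurements of real E. coli cultures.
def exponential(n0, generations):
    """Unchecked doubling every generation."""
    return n0 * 2 ** generations

def logistic(n0, rate, capacity, generations):
    """Growth that slows as the population nears the environment's capacity."""
    n = n0
    for _ in range(generations):
        n += rate * n * (1 - n / capacity)
    return n

capacity = 1e9   # cells the "test tube" can support (illustrative)
for gen in (0, 20, 40, 60):
    print(f"gen {gen:2d}: unchecked {exponential(1, gen):.2e}   "
          f"resource-limited {logistic(1, 0.7, capacity, gen):.2e}")
```

After sixty generations the unchecked column has passed a quintillion cells while the resource-limited column has leveled off near the capacity of its test tube, which is the difference between growing exponentially and growing ecologically.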
If there’s one thought on the subject of mental health and young people on the internet that you’d like our readers to carry out into the world, what would it be? Again, I would push back against the “real-name web.” Lots of people have made the argument before me but it still stands. The kids I’ve spoken to feel so much safer on social media if they use a pseudonym. Pseudonymity has so many benefits for them and, while it will always carry risks, there’s a wealth of evidence telling us that the benefits outweigh the harms. We need to listen to the kids. We need to believe what they’re saying, and create a digital world that doesn’t alienate their ideas. | Ex nihilo is one of those concepts that makes you immediately suspicious. When someone claims to have made something out of nothing, there is almost always an inconvenient history that they intend to eclipse. The fantasy that before us there was nothing has a great deal of ideological power. The creator ex nihilo gets to claim some faint emanation of divine power—the wealth creator and the job creator are treated as distant cousins to the capital-C Creator. The same is true for the idea of terra nullius. When tech came to a stretch of Northern California along the San Francisco Bay, it reframed the world it found as a bunch of apricot groves, a rail line, and not much else. This kind of pioneering story abounds in Silicon Valley. It’s not just the architecture, which seems to pretend that nothing was there before whatever spaceship of an office park landed in a given lot. Rather, it’s the environmentalist facade of an industry whose dirty beginnings—above all, in the known carcinogen, trichloroethylene, long used to clean semiconductors—have dotted Santa Clara County with a record number of Superfund sites. It’s the way, looking at the names of the wealthiest people in the Valley, you get the sense of wealth that’s been created by this generation rather than inherited from previous ones. Of course, the Zuckerberg-Chan dynasty may one day amuse the Bay Area’s society columnists with their coke-fueled antics at the annual Cotillion Debutante ball. Maybe X AE A-XII Musk will one day run for governor of California like so many scions before him who were born on third base and think they hit a triple. But for now, this wealth feels new. And the newness is part of its allure, its legitimacy. Stories abound about how Silicon Valley’s newly rich don’t know how to spend their money, or how they spend it on absurd things. There is something reassuring about that framing, suggesting as it does that these protagonists are new to wealth and privilege, that wealth is foreign to them. |
Image courtesy of Os Keyes. When you run into a system that uses AGR, it takes a photograph (or video) of you, and then looks at your bone structure, skin texture, and facial shape. It looks at where (and how prominent) your cheekbones are, or your jawline, or your eyebrows. It doesn’t need to notify you to do this: it’s a camera. You may not even be aware of it. But, as it works out the precise points of similarity and difference between the features of your face and those of a template, it classifies your face as “male” or “female.” This label is then fed to a system that logs your gender, tracks it, and uses it to inform the ads that an interactive billboard shows you or whether you can enter a particular gendered space (like a bathroom or a changing room). | “Automatic detection and aggregation of demographics and behavior of people,” a patent for a system that includes AGR. Google Patents. There’s only one small problem: inferring gender from facial features is complete bullshit. You can’t actually tell someone’s gender from their physical appearance. If you try, you’ll just end up hurting trans and gender non-conforming people when we invariably don’t stack up to your normative idea of what gender “looks like.” Researchers such as myself and Morgan Scheuerman have critiqued the technology for precisely this reason, and Morgan’s interviews with trans people about AGR reveal an entirely justified sense of foreboding about it. Whether you’re using cheekbone structure or forehead shape, taking a physiological view of gender is going to produce unfair outcomes for trans people — particularly when (as is the case with every system I’ve encountered) your models only produce binary labels of “male” and “female.” The consequences are pretty obvious, given the deployment contexts. If you have a system that is biased against trans people and you integrate it into bathrooms and changing rooms, you’ve produced an algorithmic bathroom bill. If you have a situation that simply cannot include non-binary people, and you integrate it into bathrooms and changing rooms, you’ve produced an algorithmic bathroom bill. A True Transsexual So AGR clearly fails to measure gender. But why do I say that it reshapes gender? Because all technology that implicates gender, alters it; more generally, all technology that measures a thing alters it simply by measuring it. And while we can’t know all of the ramifications of a relatively new development like AGR yet, there are a ton of places where we can see the kind of thing I’m talking about. A prominent example can be found in the work of Harry Benjamin, an endocrinologist who was one of the pioneers of trans medicine. |
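To make the structural problem concrete, here is a deliberately simplified sketch in Python—an illustration of the pipeline as described above, not any vendor’s actual code. Every name in it is hypothetical; what it shows is that the output type itself is a hard-coded binary, so no amount of tuning can make room for anyone who falls outside it.

    # Illustrative sketch only: a stand-in for the template-matching step described
    # above, not a real AGR implementation. The output type is the point -- it can
    # only ever be one of two labels.
    from enum import Enum

    class InferredGender(Enum):       # the only values such systems emit
        MALE = "male"
        FEMALE = "female"

    def classify_face(face_features: dict) -> InferredGender:
        # Hypothetical feature scores: similarity of measured cheekbone, jawline,
        # and brow geometry to a "male" and a "female" template.
        male_score = face_features.get("similarity_to_male_template", 0.0)
        female_score = face_features.get("similarity_to_female_template", 0.0)
        return InferredGender.MALE if male_score >= female_score else InferredGender.FEMALE

    # Whatever the person's actual gender, downstream systems (ad targeting, access
    # to gendered spaces) only ever receive one of these two labels.
    label = classify_face({"similarity_to_male_template": 0.4,
                           "similarity_to_female_template": 0.6})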
Adult Only, Adult Always Summer and I began fundraising for our platform, which we called V Union, in late 2020. The following January, we received a message from Donia Love, the founder and CEO of PeepMe, another nascent platform for adult creators. The PeepMe team was a mix of people already involved in sex work and sex-work advocacy, along with allies in tech development and cyber security who were willing to put their careers on the line to support the enterprise. Their mission was everything we were aspiring to do: “adult only, adult always.” So V Union decided to partner with PeepMe. | Our aims for the platform are ambitious. In addition to creating a worker-owned cooperative, in which workers have democratic power over the running of the platform and a share in profits, we also want to give individual workers ninety percent of the revenue they generate. Additionally, we intend to donate ten percent of our profits to sex-worker-led community organizations. One caveat we are applying to cooperative ownership is that the site cannot be sold off or lose its identity as an adult content platform. We believe this is the only way to create a just alternative to the exploitative business practices of the online adult industry. |
There’s clearly a whole lot of thinking and writing that will have to be done to map out these problems, and to develop potential solutions. And it’s reasonable to expect that a lot of that thinking and writing will be done by people working at universities and think tanks. But that raises a question: many of the scholars and the institutes exploring these issues have close links to, and in some cases receive substantial funding from, the tech industry. In mid-2017 we saw a stark reminder of this when the think tank New America expelled its Open Markets group for being overly critical of Google, one of its main funders. | How do you manage these concerns at Berkman, and how do you think others should manage them? I think you have to depend on the professional tenets of people at think tanks and universities. There are astroturf organizations that are designed to present and launder industry views. There are other organizations that purport to want to get it right, create a culture of wanting to get it right, and hire serious people with relevant training. Still, will they bite the hand that feeds them? It’s a good incentives question. But I see it as a real puzzle rather than as a cause for castigation. |
Or at least more than what we have already. Exactly, yeah. It’s already a social credit system in Detroit, but we don't want to add more and more technologies to that and exacerbate the violence and marginalization that residents are already feeling. And we don't want this rolled out all across the globe as a way to contain and control Black and brown people and Indigenous people and poor white people. Surveillance is not safety. | The hardest part for us is that law enforcement, government institutions, and too many people don't seem to see an alternative, especially in areas that have been deemed dangerous. But everyone needs to understand that most crime in our neighborhoods is rooted in poverty and disinvestment and racial violence. We need co-liberators; we don't need any more allies. We need folks to really feel their liberation is tied up in our liberation, you know? You founded the Library Freedom Project and you're also a core contributor to the Tor Project. Which came first for you? Library Freedom Project (LFP) came first. I was working at a library outside of Boston in Watertown, Massachusetts. Through a confluence of forces, including living through the Boston Marathon bombing and the police militarization around that, plus the Snowden revelations just a couple of months later, plus the nascent but growing Black Lives Matter movement... all of it made me think about technology and policing and privacy in a way that I never had before. I started LFP in the summer of 2014. At first it just involved teaching people about privacy in my library, doing some introductory community events. When there turned out to be a lot of interest, I cold emailed the people at Tor saying, “Hey, I'm a librarian working on privacy, and I'm teaching people about Tor. Can I be in touch with your outreach people?” And they responded, “We don't have any outreach people. Do you want to be our outreach people?” That’s how the Tor connection happened. |
Khakis and Dad Jeans Enter what may be the world’s most unlikely group of rock stars: seventeen middle-aged white guys, dressed in khakis and dad jeans, all obsessed with management. The now-legendary authors of what came to be called the Agile Manifesto gathered at Utah’s Snowbird ski resort in February 2001 to hammer out a new vision for the software development process. This wasn’t their first meeting; they’d been gathering in various configurations to discuss software development for some time, though, until the 2001 meeting, they hadn’t come up with much to show for it. This time was different. Scrawled on a whiteboard was the Agile Manifesto, a set of values that, in the following years, would become nearly ubiquitous in the management of programmers, from fledgling startups to huge corporations. It’s pleasantly concise: We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value: Individuals and interactions over processes and tools; Working software over comprehensive documentation; Customer collaboration over contract negotiation; Responding to change over following a plan. That is, while there is value in the items on the right, we value the items on the left more. | The manifesto, supplemented by twelve additional principles, targeted the professional frustrations that engineers described. Waterfall assumed that a software application’s requirements would be stable, and that slowdowns and logjams were the result of deviating from management’s careful plan. Agile tossed out these high-level roadmaps, emphasizing instead the need to make decisions on the fly. This way, software developers themselves could change their approach as requirements or technology changed. They could focus on building software, rather than on paperwork and documentation. And they could eliminate the need for endless meetings. It’s an interesting document. Given the shortage of qualified developers, technology professionals might have been expected to demand concessions of more immediate material benefit—say, a union, or ownership of their intellectual property. Instead, they demanded a workplace configuration that would allow them to do better, more efficient work. Indeed, as writer Michael Eby points out, this revolt against management is distinct from some preceding expressions of workplace discontent: rather than demand material improvements, tech workers created “a new ‘spirit,’ based on cultures, principles, assumptions, hierarchies, and ethics that absorbed the complaints of the artistic critique.” That is, the manifesto directly attacked the bureaucracy, infantilization, and sense of futility that developers deplored. Developers weren’t demanding better pay; they were demanding to be treated as different people. |
But we can’t simply abandon social media. It is too dominant, too ubiquitous, and, occasionally, too useful. For instance, its use in connecting friends and families in a time of mass migration and refugee resettlement, driven by climate catastrophe and war, remains an essential lifeline. Rather, we should aim to loosen our dependencies on the corporate platforms while also rebuilding our autonomy by developing collectively managed media infrastructures—ones that can sustain a new movement of movements and resurrect the old Zapatista dream of an “intercontinental network of alternative communication.” It’s no longer a matter of choice, but necessity. | Retros: Intro There is a lot of labor that goes into founding and running a magazine, and few if any guides to help one understand what choices matter and how to consider them. Many times through our growing pains as a DIY publishing project, we talked about the idea of having the equivalent of a “developer blog” to share what we learned, as we evolved from only sort of knowing what we were doing to finally figuring a few things out. We never got around to it, but better to do it retrospectively than not at all. |
A year after her near-crash on the freeway, Smith testified about her experience to the Nevada Assembly Committee on Commerce and Labor. Her testimony sparked a wave of national media coverage on SIDs. Calling them “kill switches” and invoking “Big Brother,” reporters from the New York Times and Mother Jones highlighted the cruelty of lenders who would disable a car after just a day without payment. Even the Daily Mirror proclaimed, “Heartless creditors LOCKED T. Candice Smith's steering wheel and stopped her engine as she drove down a busy Las Vegas highway.” Smith’s testimony was crucial for exposing SIDs to people outside of the auto finance industry, which had already been using them for more than a decade. But her experience was somewhat unique: she was fortunate enough to be able to make the majority of her payments on time, and she experienced an absolutely indefensible near-death collision because of her SID. For these reasons, the media was particularly sympathetic to her. But there are over two million drivers with SIDs in their cars today, and most of them are not in Smith’s situation. They can’t always afford to keep up with their payments, and their experiences with SIDs aren’t nearly as dramatic. Their stories rarely make the headlines, yet these are the people we need to focus on to understand how SIDs really work, and the insidious problem they represent. SIDs are not just a potential safety hazard—they are also ruthless tools of financial extraction. Lenders prey upon an increasing number of vulnerable borrowers who can’t pay back a loan, but who can be squeezed for irregular payments under the coercion of a device that can shut off their car. Lenders see themselves as providing a utility like a phone company, but if anything they’re more like the mob. As one dealer remarked, “It’s amazing how people manage to pay when they know their car won’t start.” There’s an App for That SIDs debuted in the world of auto lending in the late 1990s under the name On Time, offered by a company called Payment Protection Systems. Mel Farr, a running back for the Detroit Lions who became a well-known businessman, helped make On Time devices famous, using them in thousands of his business’s auto loans. At its peak, his group of fourteen car dealerships grossed nearly $600 million annually. Farr’s business was divisive: he was praised by then-president Bill Clinton for building the nation’s largest Black-owned business and “[bringing cars] to every community in this country,” but also criticized by others, like consumer protection activist Ralph Nader, for charging incredibly high interest rates—even on cars with the On Time devices. | At the time, the devices followed a simple schematic. There was a small computer, about the size of a billfold wallet, connected to a switch between the ignition and the starter motor, with a voltage monitor on the engine, and a small keypad. The computer was preprogrammed with a schedule and a list of numeric codes, which borrowers would receive from the lender when they made their weekly or bi-weekly payments. If they didn’t enter an acceptable code, either a light would flash or a sound would play, signaling that the car would soon be disabled. The voltage monitor on the engine was supposed to ensure that the car wasn’t in motion before the computer switched off the starter. In August 1999, On Time’s creator, Frank Simon, estimated there were about 15,000 to 20,000 of these devices on the road. |
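For readers who want the mechanism spelled out, the control logic described above can be sketched in a few lines of Python. This is illustrative only—the class, thresholds, and two-day warning window are assumptions, not details of Payment Protection Systems’ actual firmware—but it captures the sequence the text describes: a preprogrammed schedule, lender-issued codes, a warning phase, and a voltage check so the starter is only cut while the car is parked.

    # Illustrative sketch of an early starter-interrupt device's logic as described
    # above; names, thresholds, and the warning window are assumptions.
    class StarterInterruptDevice:
        def __init__(self, payment_codes, due_dates, warning_days=2):
            self.valid_codes = set(payment_codes)  # codes the lender issues at each payment
            self.due_dates = sorted(due_dates)     # preprogrammed schedule (unix timestamps)
            self.warning_days = warning_days
            self.paid_through = 0                  # how many due dates have been satisfied

        def enter_code(self, code):
            """Borrower keys in the code received after making a payment."""
            if code in self.valid_codes and self.paid_through < len(self.due_dates):
                self.valid_codes.remove(code)
                self.paid_through += 1
                return True
            return False

        def engine_running(self, engine_voltage):
            # The voltage monitor: a running alternator holds the system above ~13 V.
            return engine_voltage > 13.0

        def check(self, now, engine_voltage):
            """Called periodically: warn as a due date nears, disable only when parked."""
            if self.paid_through >= len(self.due_dates):
                return "ok"
            next_due = self.due_dates[self.paid_through]
            if now < next_due - self.warning_days * 86400:
                return "ok"
            if now < next_due or self.engine_running(engine_voltage):
                return "warn"              # flash the light / sound the tone
            return "disable_starter"       # car won't start again until a code is entered

The last branch is the safeguard the text says these devices were “supposed to” provide: the device targets the starter, and only once the voltage monitor indicates the engine is not running.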
In the midst of this, Com-Pat’s telephone number got misprinted in the telephone directory, which severely damaged the business until the mistake could be corrected. Worse yet, one of the major newspapers that carried Com-Pat’s ads suddenly decided to change its advertising policy and drop them. So imagine Joan’s horrified surprise when she entered the London Tube one day to see ads for a computer dating service plastered on the train. A new company called Dateline had burst on the scene—and computer dating was about to change dramatically. | John Patterson made a big splash with Dateline. He had several advantages: he both learned from his predecessors’ mistakes and benefited from their innovations. By the time Dateline came along, companies like Com-Pat had already softened the resistance of advertisers to the idea of computer dating and helped sell the public on the concept. |
“We have a policy of not mentioning names [in our online organizing spaces]… We’re trying to be impersonal, talk about systems and structures. Individuals are only actors in that.” — Engineer at a software development firm “I saw how important it was to build community in general. The place we can do it is the place we can spend most of our time. Since we don’t have a local community [due to the pandemic,] I wanted to dedicate my time to the one I had to be involved with. I felt more motivated at work, showing up for it, and getting to get to know people in this way as a personal thing.” — Employee at a non-profit organization For organizers, setting the tone for conversations became key to safeguarding the wellbeing of the collective. They tried to strike a balance between allowing people to vent about working conditions while also trying to foster an uplifting, hopeful tone. | Finding Abundance in Moving Together The rise of remote work forced organizers to broaden their definition of digital safety. While technical considerations and individual behavior still mattered—using the right encryption, promoting the right privacy practices—their focus shifted to a more collective and more social understanding of digital safety. Building trust online required creating spaces where people felt safe together, and creating that feeling required, above all, community management: “It was more about a sense of trust with other people that contributed to a feeling of safety than the platform.” — Employee at a non-profit organization While white-collar tech workers are not working from home as much as they were during the height of the pandemic, the patterns of their professional life have been permanently altered. Remote and hybrid work are here to stay. This means that organizers will have to become adept at orchestrating collective action campaigns within distributed teams that may afford little opportunity for direct interaction. The tactics developed during the pandemic will be critical for this work, as organizers seek to build on successful campaigns at places like The New York Times and Kickstarter to push deeper into the sector. An expanded notion of digital safety, one that emphasizes the collective and the social, will play an important role. To borrow a phrase from one organizer, true safety means finding abundance in moving together. |
Virtually everyone who has worn an ankle monitor has had a technological issue with their device, compounding the psychological stress. Battery life is a constant concern. Many parole officers believe that wearers let their monitors die in order to travel more freely, so they view dead batteries with suspicion. Thus wearers are rightfully concerned that if their monitor batteries die they can be thrown in jail or sent back to prison. Having to charge a device for two hours at a time via a two-foot charging cable only exacerbates the feeling of being chained. Constantly having to find access to power can cause significant anxiety. | There is also a literal cost to being monitored. Individuals wearing ankle monitors are often required to pay a $5 to $20 fee per day of use, despite the fact that the retail price for a GPS monitor is only between $150 and $250. LCA, the primary provider of electronic monitoring technology for the state of California, calls these programs “self-funding,” and claims they are “typically provided at no cost to the county.” When the monitors are used on juveniles, parents and family members end up paying the costs. Some counties in California have opted to create indigent funds for those truly unable to pay for the monitoring—but in at least one county, the determination of whether a wearer has the ability to pay is made by the monitoring company itself. |
In fact, an entity that operates in this manner not only exists, but is present across the US, in forty-eight states and covering a majority of its landmass. These are the rural electric cooperatives (RECs): nonprofit, local, democratic institutions that collectively control 42 percent of the country’s power distribution system and deliver electricity to over 40 million people. RECs were born out of the New Deal in the 1930s, as part of a broader federal push for rural electrification. Private utilities didn’t see any profit in connecting poor and sparsely populated regions, which resulted in large swaths of the country remaining without power. So the Roosevelt Administration stepped in, establishing the Rural Electrification Administration, which worked with rural residents to form cooperatives. These cooperatives were encouraged to apply for government loans and grants so they could build much-needed power infrastructure themselves. The initiative turned out to be wildly successful. Within twenty years, rural America went from having electrification rates of roughly 10 percent to matching the 90 percent and higher rates of urban areas. But RECs weren’t just designed to bring power to rural communities. They were also designed to be owned and governed by those communities, which to this day are largely poor and working-class. RECs are run by boards elected by the membership, and their revenues come almost entirely from this membership paying their electric bills to the co-op. They are relatively small-scale and localized, with members being neighbors, relatives, friends, and coworkers. They are structured as 501(c)(12) not-for-profit cooperative organizations, which requires them to provide electricity at “cost of service.” Moreover, they must distribute excess revenues back to the membership. This works more like a refund than a dividend: if there is money left over after covering the cost of service, a portion of a member’s electric bill is returned. RECs aren’t allowed to amass profits like IOUs, thus eliminating the temptation to let equipment fall into disrepair for the sake of shareholder returns. | RECs are a model of what democratic control of infrastructure can look like. Of course, the reality is more mixed. The history of RECs over the past century is littered with cases of mismanagement, corruption, and antidemocratic practices. As the scholar Abby Spinak explores in her 2014 doctoral dissertation, many RECs have stagnated, with relatively few members taking an active interest in cooperative management, thus enabling technocratic bureaucracies to take root. Nonetheless, RECs have a unique potential for allowing ratepayers to overcome these sorts of problems, and are ultimately an indispensable tool for forging a democratic path to decarbonization. The premise of the Green New Deal is that we can decarbonize as we democratize—that a zero-carbon world can also be a fairer one. But what are the actual building blocks of the Green New Deal? For an answer, we can turn to the old New Deal, and the cooperatives it created. |
Selling sex online provides several advantages: a better opportunity to screen clients, a stronger ability to negotiate, and a lot more independence. Online marketplaces give workers the ability to craft ads on their own terms, clearly outlining their services, prices, and boundaries long before a client may even acquire their phone number. And taking payments online—especially through PayPal, where all that’s needed to send money is an email address—is both easy and safer for anyone who might want to avoid providing their bank account information. | “Online advertising provides a level of safety to those in the sex industry that many other spaces do not,” explains Kate D’Adamo from the Sex Workers Project at the Urban Justice Centre. Not everyone can make use of these tools, of course. Sociologist Elizabeth Bernstein describes the kind of workers who have benefited most from online platforms as “overwhelmingly white, native-born and class-privileged women.” Still, for many sex workers, the impact of the internet is significant—and growing. |
Sextech, like porn, monetizes the orgasm. For Reich, however, the orgasm wasn’t a commodity—it was a weapon. It held the power to demolish the old world and build a new one in its place. It promised not only sexual liberation, but the liberation of humanity as a whole. Sextech doesn’t begin to approach the utopian intensity of Reichian revolution. But in the age of Trump, sexual reformism might be the best we can get. To give OMGYes and its sextech peers the benefit of the doubt, they are drawing attention to the pressing need for new modes of sex education. This is important work, especially as we enter a regressive political moment when the technologies and social movements that made sex for pleasure possible are under threat. | Legalized abortion is at the top of Trump’s hit list. Non-discriminatory policies protecting LGBTQ people are also vulnerable. If sextech raises its ambitions to not only help users overcome the barriers to their bliss, but also get them to think about the conditions that created those barriers in the first place, and that have a stake in perpetuating them, it may start to fulfill the promises in its mission statements. Then sextech’s “pleasure education” might live up to that very second-wave feminist slogan, the personal is political. |
Where Hollywood’s sci-fi futurism and leading tech pundits lead us astray, however, socialist feminism can lend invaluable insight, inoculating us against techno-capitalism’s self-flattering claims. The socialist feminist tradition is a powerful resource because it’s centrally concerned with what work is—and in particular how capitalism lives and grows by concealing certain kinds of work, refusing to pay for it, and pretending it’s not, in fact, work at all. | That women have special insight into technology shouldn’t come as a surprise: after all, they have been sold the promise of liberation through labor-saving devices since the dawn of mass consumerism, and this applies to kitchen appliances in particular. (It’s a short and rather sad leap from self-cleaning ovens to self-cooking ones.) Despite this, they have seen their workloads multiply, not diminish. |
We’ve published a range of pieces, and some do point toward possible solutions. But, as editors, I hope we’ve discouraged our writers from claiming any easy victories. If you’re going to propose a solution, we want it to feel earned. We want it to feel specified. AB: The solution is often the least interesting part. One of the maxims that I use is that you have to start where your readers are. And our readers don’t even necessarily know what the problem is. Why are the genealogies of machine learning datasets a problem, for example? What are the dimensions of the problem? Why have other solutions fallen short? We want as fine-grained a mapping of the terrain as possible. JF: I know we’ve sometimes discussed the tendency of certain pieces to seek a solution where there isn’t an easy one, a piece that can be really illuminating in its analysis of a problem, before ending with a paragraph that basically concludes: “The solution is that we have to fundamentally overhaul everything about our society, economics, and culture.” It’s like: sure, maybe that is a solution, but it’s an unsatisfying way to end a piece. | MW: It’s true, as Ben says, that we don’t occupy a traditional literary scene. But we do have a certain literary streak to us, and that’s expressed in our shared desire to create a space for writing about technology that doesn’t demand simple solutions, whether it’s swapping algorithm A for algorithm B or, you know, full communism. |
In recent years, that insight has led the government to encourage more siloed digital nooks, such as WeChat’s 500-person-max group chats, and to more heavily police the open digital spaces where enough strangers can congregate to become a problem. The type of digital spaces, in other words, that bullet comments enable. The government’s embrace of Bilibili, then, may just be a way of keeping it on a tight leash. In 2017, Bilibili’s app was temporarily pulled from mobile stores as part of a larger crackdown on “unproductive culture,” which also saw hip-hop artists with tattoos banned from television. This warning shot was fired despite the politically apathetic, if not downright patriotic tone of most comments on the site: Bilibili users are much more likely to bicker over whether Marvel or DC is the superior provider of superhero movies than to poke fun at Party leadership. But it is the very strength of its capacity for re nao that makes bullet comments a politically risky format: a crowd, no matter how anime-obsessed, just needs the right spark to become a mob. | In October 2017, the 5th Plenary of the 18th National Congress of the Communist Party of China announced the future direction of Chinese economic development: Fully develop the basic role of consumption in economic growth; focus on expanding household consumption; steer consumption in intelligent, green, healthy, and safe directions; and focus on expanding service consumption to promote the upgrading of consumption. It was easy to miss the point amidst all the droning officialese. Small wonder, then, that government bodies and the business community simplified this directive into four easy-to-remember characters: 消费升级 (xiaofeishengji), “consumption upgrade.” In the past three years, the phrase has appeared in the policy documents of eleven provinces and autonomous regions, a dozen cities, the Ministry of Commerce (MoC), and China’s top executive body, the State Council. The aim of the “consumption upgrade” is simple: to transition from an investment-driven economy powered by industry to a consumption-driven economy sustained by a consumer society. Promoting domestic consumption isn’t a new priority: the Chinese Communist Party has pursued this goal since at least the late 1990s, when the 1997 Asian financial crisis alerted the country’s leadership to the perils of an export-driven economy. But it has acquired more urgency in recent years, as investment-driven growth has slowed. Moreover, the internet offers a powerful new tool for building a consumer society, one that the government believes will play a crucial role in China’s economic transition. The MoC, at a press conference in late 2017, identified the internet as a “channel” that would make the “consumption upgrade” possible—specifically, by linking populations in rural and remote regions with high-quality goods at low cost, ensuring that economic development reaches the whole population. As a result, you might expect the Chinese Communist Party to love Pinduoduo (PDD), the group-buying app founded in 2015. PDD works like Groupon, but rather than events and services, the app’s listings are primarily consumer items, from fresh fruit to diapers to home electronics. According to a 2018 study by venture capital firm GGV Capital, around 60 percent of PDD’s users come from less developed regions of China, and the deals are considerable: 399 RMB ($60) for a LED TV, for example. |
In China, by contrast, there is less room for such activity. For one, it’s unclear how credit legislation could be used to mount comparable efforts. And, although a few Chinese consumer protection organizations have become attuned to the ways that tech firms’ use of customer data can lead to privacy abuses, they have not taken up the issue of blacklists because the practice is treated as socially acceptable in China. Some avenues for contesting cases where people believe they are wrongfully blacklisted do exist, but they are unlikely to be widely known. In places like Shanghai and Hubei, local social credit regulations lay out steps for filing “objection applications” where individuals have found credit information to be incorrect or omitted. Yet, given the scope of what counts as “credit information,” it may be difficult for non-experts to understand and make a case for why their records should be modified. Moreover, it’s unclear if the punishments for being blacklisted leave individuals enough room to redeem themselves, or if the constraints are so stringent that they create insurmountable obstacles to clearing one’s name. | The current state of the social credit system is far less sophisticated than its portrayal in the foreign press. But if the scope of what can count as blacklist data widens, and if the tech sector takes an even more pervasive “searchlight” approach to seamlessly melding these data into their core offerings, the system could move much closer to the dystopian picture that appears in the media. In particular, if China embraces the marketization of blacklist data—so that data is bought and sold, like in the US—information about individuals would become even harder to track and contest. Over the next few years, the Chinese government will continue to tinker with implementing the social credit system. People who are wholly unaffected by blacklists may view them favorably, as proof that the government is proactively combating the laolai phenomenon. Yet there needs to be a critical analysis of the social credit system that centers the perspectives of those who are most directly affected. We need to hear from the laolai themselves to understand what the unforeseen consequences of this vast policy project may be. Only then can we begin to see what the social credit system is actually achieving—and at what cost. |
Carceral technologies are racist because the institutions that develop and use them are intended to manage populations in a country that has a white supremacist inheritance. These technologies are not incidentally racist. They are racist because they're doing the work of policing—which, in this country, is a racist job. There has been a lot of work devoted to proving that particular algorithms are racially biased. And that's well and good. But there was no question that these algorithms were ever not going to be racist. What would a not-racist predictive policing program look like? You would have to reimagine prediction. You would have to reimagine policing. You would have to reimagine the history of computation. You would have to reimagine the racial configuration of neighborhoods. You would have to reimagine a lot of things in order to arrive at even the slightest possibility of a not-racist predictive policing system, or a not-racist facial recognition system. So yes, they're racist. There's no question that they're racist. But the reason that they’re racist is because they're used to enact modes of racialized violence. | In recent years, scholarly communities have focused more attention on issues of fairness, accountability, and transparency in machine learning. We’ve also seen a broader conversation emerge around “AI ethics.” What’s your view of these discourses? A lot of these research communities begin with methodologies from STS (Science and Technology Studies) and adjacent fields, where the emphasis is on trying to understand sociotechnical systems. But they often have an inability to apply that analysis to themselves—to interrogate the role that academia and techno-criticism play in the vast sociotechnical assemblage that buttresses the conception, design, and implementation of carceral technologies. It’s not due to a lack of imagination that these scholarly communities have continuously circled the drain on questions such as the presence of racial bias in particular systems—this is a political arrangement. It’s a structural condition of how the grants that fund their work are allocated, and the relationships they have to industry and to government institutions. For decades, research questions have been staged to these scholarly communities in very particular ways by carceral institutions. There is a given-ness to the problems that these researchers are failing to interrogate. For instance, it's no accident that for years everyone was like, “We need explainable AI,” and then DARPA started handing out millions of dollars worth of grants to develop explainable AI. Historically, certain academic disciplines have had moments when they decided to reexamine their relationship with the military and police industrial complex. Consider anthropologists refusing to participate in the US military’s human terrain systems in Iraq and Afghanistan, for instance. But the ethics-in-technology communities haven’t had that kind of reckoning yet, where they start to deeply interrogate why they're asking the questions that they're asking. Because these technologies are moving so quickly, I think people in these research communities haven't had a chance to reflect on why they keep asking the questions that they're asking. Where do the questions come from? And why is it that they’re asking the exact same questions that DARPA is asking? And why isn't that entanglement ethically complicated for them? |
[Ed.: The idea of “libidinal economy” originates with Freud, was further developed by the French philosopher Jean-François Lyotard, and more recently has been taken up by Frank B. Wilderson III, Fred Moten, and other Black thinkers. It emphasizes that emotional intensities, such as desire or antiblackness, drive “rational self interest” or political-economic modes of thinking.] The libidinal approach is still too new. I don’t have the impact on social science research yet because I’m still talking about Black people. That’s my mistake. I didn’t try to claim white people are bad. That wasn’t my concern. My concern was to say, “Look at how joyous Black people are,” which is a very different thing. I have to give a talk to Microsoft in a month and they’re like, “Well, can you present your book in a way that makes it palatable to white people?” Nah, I can’t. And it’s not going to happen because you need to learn about me as opposed to learning what I do to resist you, which are two totally different things. | Data science and information science have long been and will continue to be resistant to theories like libidinal economy, but also to theories like critical race theory, because they are resistant to things which are not of them. They think about things which they can reach out and fix, like ethics. Or reach out and bring in, like the digital divide stuff. But they don’t ever want to engage with the question of how they benefit from certain structures, or how to fix the problems they’ve created. They don’t want to do that. |
JW: I didn’t know anybody in the Bay Area when I came here. I didn’t even know what books I was supposed to read. So I joined as many different communities as I could. I thought they could give me what I was missing. It can be hard to find that information, because there are so many limits in terms of what is chronicled. Most of the stories about Silicon Valley are written by very particular people who live very particular sorts of lives. There are so many stories that are not chronicled, like those of women and people of color. So you have to go find those stories in different contexts—more high-trust contexts, not public forums but Signal and phone calls. | Ultimately, I felt like I had to learn the right language to pass in the Valley. A lot of what you need to know is what you’re not supposed to talk about. In tech, you don’t really talk about money or power. Everyone’s default is, “I have no idea how much anyone earns.” Everyone dresses approximately the same and lives in approximately the same way. But some people wield much more power—orders of magnitude more. AL: A lot of queer culture is oriented around being very “out.” But something’s at stake, so you might want to twist your words to get what’s best for the company. I don’t know if I care about participating in mainstream queer culture anymore, because it’s just not made for Asians. I want to be representing me, I don’t want to be queer for the sake of being queer. So if there is a version of like, Sino-queerness, which draws from traditions of Taoism and the existence of two spirits within the body or whatever, that’s the version I want to be creating. The Bamboo Ceiling JW: Technologists are told that they only need technical competence, which is incidentally convenient for the bureaucratic governing class. This is perpetuated by the model minority myth, which says you only need to be good at STEM to have a good life, and that to be a good citizen you should keep your mouth closed. But the reality is that those skills only go so far. People talk more about the “bamboo ceiling,” but it’s always been there: Chinese-Americans might have technical competence, but they often don’t have the social capital needed to rise into C-level positions. Those are positions that require a lot of networking, a lot of buy-in, and often build off of generational privilege. Seeing which founders get funded and what they look like—that’s all about social capital, not engineering prowess. But then you look at those founders’ engineering teams and it’s a very different demographic makeup. |
These changes are affecting the content of contemporary politics. The transmogrification of the bedroom from a place of rest into a space in which people not only sleep but live and work—and, crucially, participate in political conversations online—goes a long way toward explaining both the form and content of contemporary politics. This turn brings both opportunities and risks. On the one hand, bedroom politics may offer an opportunity to re-embed causes that may seem lofty in everyday life. Making the political viscerally personal may help politicize more people. On the other hand, the convergence of the personal and the political may create a situation where we are no longer able to distinguish between the two. Regardless, there is no way to understand the logic of contemporary politics without thinking about the bedroom. | The Conquest of Bed Traditionally, the places of politics have been squares, streets, conference halls, churches, meeting houses, and party offices. In his influential analysis of the bourgeois public sphere, Jürgen Habermas observes that such sites are the physical equivalents of the news media: they are where ideas circulate. The philosophers Oskar Negt and Alexander Kluge argue for the existence of a proletarian public sphere, which coheres around humbler locations like factories, pubs, and working-class neighborhoods. |
Do you think the lack of internal communications tools is intentional? I think this is one of those cases where you should not presume malice. Amazon doesn’t actively not want employees to talk to each other. They just don’t see how employees talking to each other benefits productivity, morale, or the bottom line. If it did, and that impact could be quantified in some way, we'd have it tomorrow. But as it is, Amazon gives employees the tools that it thinks will help them get the job done. And they don't see employee fraternization as relevant to the job. | Google, by contrast, has very robust internal communication infrastructure. And that infrastructure played an important role in facilitating organizing at Google. (Although more recently, management has been limiting the kinds of conversations that can happen on internal platforms.) Do you think the absence of similar tools helps explain why Amazon has seen comparatively less organizing? Well, if you look at Google, you'll notice they're headquartered in Mountain View, in the heart of Silicon Valley. And if you're an employee at Google and you're good at your job and you want to leave your job tomorrow, there are fifty-three employers out there that are going to be ready to hire you. |
JS: That makes sense. But I’m also curious about the politics of the transition. It’s one thing to say the team is burned out, so we need to raise funding to pay people more fairly. But it also seems like, at least from my reading of “Logic(s): The Next Chapter,” that there is an ideological shift at work. Xiaowei, you and Khadijah write about wanting to focus on queer, black, and Asian identities, to bring in perspectives outside of the tech industry, and to engage with international issues. Is that in response to a new political moment? And how does that relate to the previous iteration of Logic? XW: We’re in a moment where there’s a lot of critical tech discourse that talks about the poor impacted people in a super patronizing way. Then you have groups like Allied Media Conference and the Detroit Community Technology Project that have a very different perspective, which is that people outside of the tech industry, people from marginalized backgrounds, actually have a lot of insights to bring to the discourse and are building new technologies. So part of our hope with the next chapter of Logic is to build community with that in mind, and shine a light on those who are already doing that work. And that can take different forms: as long as I’ve been part of Logic, it’s always felt like a place where multiple political threads could coexist. | BT: Xiaowei touches on an important continuity between the two eras of Logic, which is the emphasis on the agency, and indeed the wisdom, of people who are affected by technology. That is, not seeing them as objects or victims, but rather as agents, as experts, as subjects of their own liberation and of everybody’s liberation. |
The company probably benefits from people being kind and showing up for each other in cases like that. Definitely. I mean, on-call can go lots of ways. What I’m describing, even if I don’t love on-call, is being on a team with people I trust, knowing that I won’t get yelled at or fired for unintentionally doing something that causes damage, and knowing that there’s a genuine spirit of reflection around how to fail better. The thing that motivates me during on-call, much more than fixing the tech, is my teammates. There are things that are beyond our control: there’s a lot of failure on the internet and we don’t pick the days when a critical service goes down. If everyone is always exhausted and grumpy when they show up, that sucks for them and it sucks for me. So, almost always, if someone gets woken up in the middle of the night, another person on the team will offer to take over their shift the next night so they can get a full night’s sleep. Because waking up one night sucks, but waking up two nights in a row? You’re toast. We also encourage one another to ask for help and to offer help. If we’ve identified something that is really disruptive to each other’s day-to-day lives, we take that seriously and make changes so that that thing doesn’t happen anymore. That matters when you think about the fact that we are on call for holidays and weekends. There’s a lot of motivation for us to make on-call not terrible. So we are caring for infrastructure, but ultimately we’re taking care of each other. | “Taiwan is a paradise bubble,” my dad told me in March, during my first few days back at my parents’ home. “This is probably one of the safest places in the world right now,” he said. Seeing the rush hour crush on the Taipei metro and children in school uniforms clustering at bus stops after school, all without exhibiting signs of fear or anxiety, I couldn’t agree more. |
That sucks. We were told that if you know how to code, your record isn’t going to be that much of an issue just because the job demand is so high here. I haven’t started applying yet. Did you start The Last Mile knowing when your release date was? I was sentenced to sixteen years to life. What that means is that you do sixteen years and then you start going in front of a parole board. The parole board wants to see you take responsibility for your crime. They want to see how you’ve used your time in prison. They want to know what kind of support network you’re going to have in place to keep you from re-offending when you do get out. | It’s really hard for lifers to get found suitable for release. And it should be, I think. Maybe not as strict as it has been in the past—and the door is opening up a little more every day, especially with Jerry Brown in office. But I understand the parole board commissioners have a tough job to do. I went to my first parole board hearing in 2015 and I felt like I had done everything right—I’ve gone through The Last Mile, I was in the coding program, and they still said no. They said we want some more, to make sure that you have a deeper insight into why you committed your crime. |
Martin Pichinson is about seventy, a former music manager who came to Silicon Valley in the mid-1980s. His business partner is Michael Maidy, another septuagenarian who, judging from a Google search, favors dark suits that look about a half-size too big for him. Maidy was recently the CEO of another failed tech company: Pebble Tech, LLC, maker of smartwatches. Pichinson and Maidy look about as far from our image of the Silicon Valley CEO as you can imagine. But they are nevertheless an important, if rarely glimpsed, part of its ecosystem. | Their actual company is Sherwood Partners, and unlike Kozmo.com, Pebble, and about a thousand other companies they have wound down over the years, it (a) still exists and (b) its business is always booming. The company is Silicon Valley’s premier specialist in “assignment for the benefit of creditors” (ABC)—a process by which insolvent companies assign their assets, titles, and property to a trustee. |
Visualizing census data by district is invaluable for drawing legal districts. But a gerrymanderer needs more than just census data—they also need to predict election results by census block. They need to know the color of the stones, in other words. Election results, however, are not reported by census block, and methods of collecting and tabulating election data vary greatly by state. | The Voting Tabulation District (VTD) is the Census Bureau’s general term for the “unit” at which election data is reported. A VTD might be a voting district, a ward, or a county, depending on the state. Fortunately, these VTDs are themselves built from census blocks, so that the process of integrating this data involves disaggregating the VTD election data into its constituent blocks. |
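As a rough illustration of that disaggregation step, here is a short Python sketch that pushes VTD-level vote totals down to the census blocks that make up each VTD, proportionally to block population. The data structures and field names are hypothetical, and real redistricting pipelines typically weight by voting-age population or geocoded voter files rather than a simple population split; this is only meant to show the shape of the operation.

    # Illustrative sketch: disaggregate VTD-level election results to census blocks,
    # splitting each party's total in proportion to block population.
    def disaggregate_votes(vtd_results, blocks_in_vtd, block_population):
        """
        vtd_results:      {vtd_id: {party: votes}}
        blocks_in_vtd:    {vtd_id: [block_id, ...]}
        block_population: {block_id: population}
        returns           {block_id: {party: estimated_votes}}
        """
        block_votes = {}
        for vtd_id, results in vtd_results.items():
            blocks = blocks_in_vtd[vtd_id]
            total_pop = sum(block_population[b] for b in blocks) or 1  # avoid divide-by-zero
            for b in blocks:
                share = block_population[b] / total_pop
                block_votes[b] = {party: votes * share for party, votes in results.items()}
        return block_votes

    # Example: a VTD with 700 A votes and 300 B votes, split across blocks of 600 and
    # 400 people, assigns 60 percent of each party's total to the first block.
    example = disaggregate_votes(
        {"VTD-1": {"A": 700, "B": 300}},
        {"VTD-1": ["blk-01", "blk-02"]},
        {"blk-01": 600, "blk-02": 400},
    )

Once every block carries an estimated partisan lean, the gerrymanderer has the “color of the stones” and can assemble districts block by block.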
The Pornhub Awards afterparty, a swanky affair held on the rooftop of affluent hipster mecca the Ace Hotel, was organized by Greg Lansky. The 2018 porn industry includes larger-than-life characters like Lansky—creator of big international brands like Vixen, Tushy, and the biggest interracial brand of all time, Blacked—who are attempting to be to today’s digital porn content what Playboy was to an earlier era of paper-based erotica. | Obviously inspired by Hugh Hefner in terms of exposure, projected lifestyle, and the desire to be embraced by mainstream publications, the French-born Lansky is known to pay top rate to his performers and spends lavishly on promotional stunts and PR. By crowning girls “Vixen Angel” of the month or the year, Lansky deliberately fosters an old-fashioned, glossy star system of sorts within the industry. His graphic design and aesthetics owe a lot to early 2000s French Vogue and American Apparel billboards. |
Fast forward a couple of years. We now have close to six hundred cameras all over Detroit, and the Mayor would like to push that number to 4,000. Project Green Light locations pay a monthly rate so that if something happens at that location, they get priority from police over non–Green Light locations. So they pay for policing. And then, of course, the DPD leadership signed a contract to use facial recognition on everything from drones, traffic lights, mobile devices—pretty much anything they could attach a surveillance camera to. They were using that facial recognition technology on footage from their Green Lights for about two years before social justice activists and technologists in the city really got wind of it, when DPD tried to push through a directive that would solidify their use of it. Before that proposal, there was no policy governing its use. | What did the rollout of Project Green Light look like within the community, and how did you start to organize folks against it? The police really tapped into this negative narrative that has hovered over Detroit for decades. They were touting Project Green Light as the salvation of the city, like, “Hey, we have the answer. We know you are all afraid of each other, so we're going to give you all these security cameras, and you’re going to feel safe because we're always going to be watching you.” The police department’s campaign targeted senior citizens mostly—the ones who are retired or sitting at home, inundated with media images. They didn't hide the idea of putting cameras everywhere like they later hid their adoption of facial recognition. In fact, they were trying to separate the two ideas, saying that Project Green Light was different than facial recognition. But the two systems needed each other; the city needed to overhaul itself with all these surveillance cameras in order to make facial recognition a viable system. The DPD just kept saying, “Facial recognition is not embedded within the cameras,” and, “We'll only use the facial recognition if we absolutely have to.” Meanwhile, they were proposing policy directives that were asking for real-time live tracking on mobile devices, drones, traffic lights, and more. |
Software engineers now expect to be on call and sometimes get called in the middle of the night. Were you on call for these systems? Yes, I was. All thirty-some years. We would have two or three people in a group, so we could rotate. But then we got people in that had grown up around PCs, and they just don’t appreciate that computers have to operate 24/7. How did being on call for thirty years affect your life? Since I did it my whole life, it’s hard to say. I was limited in how far away I could go for a social event since I had to be able to get back to work if called. And limited on drinking since I had to be able to drive to work at any time. | Most nights I didn’t get called, but then some nights would be bad and I’d get two to three calls in one night. Other times, I wouldn’t be called but once a month. When I got towards the very end of my career, it was less than once a month because we kept so on top of those storage groups, made sure we had enough space on them. |
The religious and political transgressions of these detainees were frequently discovered through social media apps on their smartphones, which Uyghurs are required to produce at thousands of checkpoints around Xinjiang. Although there was often no real evidence of a crime according to any legal standard, the digital footprint of unauthorized Islamic practice, or even an association to someone who had committed one of these vague violations, was enough to land Uyghurs in a detention center. Maybe their contact number had been in the list of WeChat followers in another detainee’s phone. Maybe they had posted, on their WeChat wall, an image of a Muslim in prayer. It could be that in years past they had sent or received audio recordings of Islamic teachings that the Public Security Bureau, which polices social life in China, deems “ideological viruses”: the sermons and lessons of so-called “wild” imams, who have not been authorized by the state. Maybe they had a relative who moved to Turkey or another Muslim-majority country and added them to their WeChat account using a foreign number. The mere fact of having a family member abroad, or of traveling outside China, as Alim had, often resulted in detention. Not using social media could also court suspicion. So could attempting to destroy a SIM card, or not carrying a smartphone. Unsure how to avoid detention when the crackdown began, some Uyghurs buried old phones in the desert. Others hid little baggies of used SIM cards in the branches of trees, or put SD cards containing Islamic texts and teachings in dumplings and froze them, hoping they could eventually be recovered. Others gave up on preserving Islamic knowledge and burned data cards in secret. Simply throwing digital devices into the garbage was not an option; Uyghurs feared the devices would be recovered by the police and traced back to the user. Even proscribed content that was deleted before 2017—when the Public Security Bureau operationalized software that uses artificial intelligence to scour millions of social media posts per day for religious imagery—can reportedly be unearthed. | Most Uyghurs in the detention centers are on their way to serving long prison sentences, or to indefinite captivity in a growing network of massive internment camps which the Chinese state has described as “transformation through education” facilities. These camps, which function as medium-security prisons and, in some cases, forced-labor factories, center around training Uyghurs to disavow their Islamic identity and embrace the secular and economic principles of the Chinese state. They forbid the use of the Uyghur language and instead offer drilling in Mandarin, the language of China’s Han majority, which is now referred to as “the national language.” Only a handful of detainees who are not Chinese citizens have been fully released from this “re-education” system. Alim was relatively lucky: he had been let out after only two weeks; he later learned that a relative had intervened in his case. But what he didn’t know until police arrested him at the mall was that he had been placed on a blacklist maintained by the Integrated Joint Operations Platform (IJOP, or 一体化联合作战平台), a regional data system that uses AI to monitor the countless checkpoints in and around Xinjiang’s cities.
Any attempt to enter public institutions such as hospitals, banks, parks or shopping centers, or to cross beyond the checkpoints of the dozen city blocks that were under the jurisdiction of his local police precinct, would trigger the IJOP to alert police. The system had profiled him and predicted that he was a potential terrorist. Officers told Alim he should “just stay at home” if he wanted to avoid detention again. Although he was officially free, his biometrics and his digital history were being used to bind him in place. “I’m so angry and afraid at the same time,” he told me. He was now haunted by his data. Unlimited Market Potential The surveillance and predictive profiling systems that targeted Alim and the many Uyghur Muslims he met in detention are the product of a neo-totalitarian security-industrial complex that has emerged in China over the past decade. Dozens of Chinese tech firms are building and marketing tools for a new “global war on terror,” fought in a domestic register and transposed to a technological key. In this updated version of the conflict, the war machine is more about facial recognition software and machine learning algorithms than about drones and Navy SEAL teams; the weapons are made in China rather than the United States; and the supposed terrorists are not “barbaric” foreigners but domestic minority populations who appear to threaten the dominance of authoritarian leaders and impede state-directed capitalist expansion. |
No single technology contributes more powerfully to our perpetual data hemorrhage than the internet, of course. The internet both facilitates the flow of data and constantly creates more of it. It goes without saying that everything we do online leaves a trace. And companies are working hard to ensure that we leave more traces, by putting more of our life online. | This is broadly known as the “Internet of Things”: by placing connected devices everywhere, businesses hope to make corporate surveillance as deeply embedded in our physical environment as it is in our virtual one. Imagine a brick-and-mortar store that watches you as closely as Facebook, or a car that tracks you as thoroughly as Google. This kind of data capture will only grow in coming years, as the already porous boundary between online and off disappears. |
Criminal Justice Creep New data sources are incorporated into Palantir regularly. One captain commented: I’m so happy with how big Palantir got… I mean it’s just every time I see the entry screen where you log on there’s another icon about another database that’s been added … they just went out and found some public data on foreclosures, dragged it in, and now they’re mapping it where it would be relative to our crime data and stuff. | Another interagency data integration effort is LA County’s Enterprise Master Person Index (LA EMPI) initiative. If established, LA EMPI would create a single view of an individual across all government systems and agencies: all of their interactions with law enforcement, social services, health services, mental health services, and child and family services would be in one place under a single unique ID. Although the explicit motivation behind the EMPI initiative is to improve service delivery, such initiatives extend the governance and social control capacities of the criminal justice system into other institutions. |
Sex, like computing, puts one body in a feedback loop with another. Each body sends signals to the other, and each modulates its actions accordingly. But with cybersex, which overlays the principles of computing onto sex, the feedback cues are configured by the interface’s materiality—a materiality itself configured by the surrounding economic system. | Within the enabling constraints of the RTI Network, the cam worker cultivated an intimate, cybernetic relationship to the sensors distributed throughout the device, to the algorithms that rendered her movements visible and tangible, to the latency of the network that transmitted her commands, and to the body of the subject that she labored on. Her labor generated revenue for AEBN by producing a simulated sexual experience for a paying customer. But the simulation, for all its attention to realism, omitted a crucial aspect of sex. If sex involves two people both giving and receiving sensation, the RTI Network imposed a different division of labor: the female worker gave the sensation, and the male customer received it. This reflected AEBN’s vision for the future of technologized sex, articulated most pointedly by EJ in HBO’s 2014 Sex/Now feature: “We’re going to take sex over the internet into the future. Sex over the internet started with still images, then you could download a video clip. So what is the next thing? The next thing is being able to actually have sex over the internet […] We’ve always said that on that day when a girl in Romania can reach out and touch your penis, that’s the beginning of something completely new.” |
But to be able to wrestle with these questions, we need to change the language we use to think about engineering and technology. Saying engineers “solve problems” implies a kind of mathematical tidiness that doesn’t reflect our messy reality. This language suggests that problems just disappear or are neatly contained through technologies. Yet if Mexico City’s floods are any indication, we should instead talk about how engineers transform problems. | This subtle shift in language brings our attention to the fact that any “solution” produces, inevitably, more and different problems—many of which may not be visible in the moment or place where it is implemented, or to the particular group of people designing the intervention. This seems to be, at first glance, obvious. We often say that a given tool “creates more problems than it solves.” Yet the idiom is rarely taken to heart—even if, as engineers, we talk about tradeoffs and generate cost-benefit analyses of different “alternative solutions.” Anyone who has ever worked in an engineering firm or the government knows that these are inevitably influenced by our own biases and interests, whether conscious or not. Furthermore, not every effect of an engineering solution can be quantified in dollars and placed into our analysis. |
Recently, however, this optimism has begun to unravel. The problems of technology have come into sharper focus. But this has brought difficulties of its own: technological power today operates in distinctive ways that make it both more dangerous and potentially more difficult to contest. First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce. | On the consumer side, this places Amazon in a unique position to target prices and influence search results in ways that maximize its returns, and also favor its preferred producers. On the producer side, Amazon can make or break businesses and whole sectors, just like the railroads of yesteryear. Book publishers have long voiced concern about Amazon’s dominance, but this infrastructural control now extends to other kinds of retail activity, as third-party producers and sellers depend on Amazon to carry their products and to fairly reflect them in consumer searches. |
In the West, the Chinese internet is mostly depicted in negative terms: what websites and social platforms are blocked, what keywords are banned, what conversations and viral posts are scrubbed clean from the web overnight. This austere view is not inaccurate, but it leaves out what exactly the nearly 750 million internet users in China do get up to. | Take a look at bullet comments, and you’ll have a decent answer to that question. They represent the essence of Chinese internet culture: fast-paced and impish, playfully collaborative, thick with rapidly evolving inside jokes and memes. They are a social feature beloved by a generation known for being antisocial. And most importantly, they allow for a type of spontaneous, cumulative, and public conversation between strangers that is increasingly rare on the Chinese internet. |