Yeah, exactly. It’s a way to manipulate you into feeling guilty for demanding more, for expecting a reasonable work-life balance.

Looking For Group

You’re currently involved in organizing within the game industry. How did you get involved with that?

When I started at my company six years ago, I had no nuanced political analysis, and no real idea that I could be bargaining for anything. I was just drinking the company Kool-Aid, and accepting everything they said. I really believed that the company was taking care of me. Only in the last three years have I started to piece things together through my own experiences and by reading stories of labor abuses throughout the industry, which go back thirty to forty years—pretty much since the industry started.
I first got involved with GWU after the March 2018 GDC, which is the major annual industry conference for game developers. During GDC, there was what was effectively an anti-union panel, which galvanized people to join the organization. After I read about that, I inquired a bit more and reached out to folks on Twitter.
He often thinks about Monsanto, where his dad worked: although anything but open-source, the company has mastered the art of data collection. Its extensive laboratory research on corn and soy has given it an enormous competitive advantage. For digitized agriculture to fulfill its potential, it has to follow Monsanto’s lead on data. It must standardize how it gathers information, and develop a process for converting that information into knowledge that improves the growing process. If and when OpenAg hacks its way to a platform where users can post and exchange climate recipes, the open-source model could prove a valuable tool for commercial ventures—much as Linux, Apache, and countless other projects have for the tech industry.
Open-Source Capitalism

For all its idealism about disrupting Big Ag, OpenAg may ultimately help strengthen it—or reproduce it in a different form. Just because open-source projects give their source code away for free doesn’t mean they aren’t immensely useful for commercial enterprises. Today’s tech giants rely heavily on open-source projects.
Shirley realized that they would. She had seen the need in both government and industry for programmers who could unleash the potential of expensive hardware with good software. Without software, after all, computers didn’t do anything, and with poor software they couldn’t fulfill their potential or justify their cost. Shirley also knew that British industry and government were getting rid of most of the people who had programming skills and training because they were women, thereby starving the entire country of the critical labor that it needed to modernize effectively.
Shirley scooped up this talent pool by giving women a chance to fulfill their potential. Offering flexible, family-friendly working hours and the ability to work from home, her business tapped into a deep well of discarded expertise. Because people who could do this work were an absolute necessity, the government and major British companies hired her and her growing team of women programmers to do mission-critical computer programming for everything from payroll and accounting to cutting-edge projects like programming the “black box” flight recorder for the first commercial supersonic jet in the world: the Concorde.
Visibility, in the SCM context, is itself highly selective.

Learning To See

The challenges are political as well as technical, in other words. And the political challenges are immense. In the absence of real efforts to create democratic oversight of supply chains, we’ve come to see them as operating autonomously—more like natural forces than forces that we’ve created ourselves.
In 2014, the Guardian reported that Burmese migrants were being forced into slavery to work aboard shrimp boats off the coast of Thailand. According to Logan Kock of Santa Monica Seafood, a large seafood importer, “the supply chain is quite cloudy, especially when it comes from offshore.” I was struck by Kock’s characterization of slavery as somehow climatological: something that can happen to supply chains, not just something that they themselves cause.
When the pandemic began, I was in New York. Throughout February and early March, I checked in frequently with my parents in Taiwan. Things are fine here, they said. Meanwhile, the situation in New York was worsening. Cases were beginning to appear, but the government response was hesitant and nebulous. The virus is coming, warned the media. It may already be here.

If it’s already here, I wondered, why aren’t we doing anything about it? Why is everything continuing as normal? The situation felt out of control from the start. My parents urged me to come to Taiwan. On March 14th, 2020, I flew to Taipei on a direct flight.

My parents were right. From what I can tell, apart from masks on every face, life in Taiwan is uninterrupted by the pandemic. Schools, pharmacies, post offices, convenience stores, and parks are all open. Coffee shops, abundant in Taipei, are full of people. Even the shopping malls are operating at regular capacity. Aside from a handful of attendants stationed at the front entrances, armed with temperature readers and hand sanitizer spray, every store remains open.
Early on, experts predicted that Taiwan, due to its proximity to mainland China, would have the second-highest number of COVID-19 cases outside of the mainland. But the predicted wave of infections never materialized. As of late June 2020, Taiwan had reported only 447 known cases and 7 deaths. Compared with worldwide figures, these numbers are shockingly low: more than 22,000 people have died in New York City alone.

As countries around the world struggle to contain the virus, many observers are looking with great interest at Taiwan’s success. Rarely making international news, and typically only in connection with mainland China, Taiwan has lately been held up by journalists and academics as a model for how to manage the pandemic. How did Taiwan, with a population of twenty-three million, eighty-one miles away from mainland China, with over 800,000 citizens working there and frequently traveling back and forth, manage to avoid the public health crisis that is now destabilizing the rest of the world? One part of the answer, I discovered, is a unique mix of technological interventions—some led by the government, others coming from the grassroots—that have helped coordinate the massive mobilization of people and resources required to fight the virus.

Fenced In

When I arrived in Taipei, I sailed through the airport. “Where are you coming from?” a health official asked. “New York,” I said. She took a health self-assessment form that I had completed, handed me a slip of paper with instructions for how to monitor my wellbeing, and waved me on. I had expected a more rigorous interrogation.

It turns out I had returned to Taiwan just in time. Had I landed two or three days later, when the authorities raised the United States’s travel advisory from Level 1 to Level 3, my experience would have been radically different. My friend Ting wasn’t so lucky. She arrived later than I did, also traveling from New York. And it was through her experiences that I first began to learn about the role of technology in Taiwan’s pandemic response.
Crushing the Competition

The irony of the corporate argument against municipal broadband is that private providers hate competition. Throughout the United States, privatized internet markets have led to the exact monopolistic conditions that free-market hawks rail against: a 2016 FCC report found that only one or zero providers offered services qualifying for the federal definition of broadband in over half of the developed census blocks in the United States. Less than half of the country’s developed census blocks have access to 100 Mbps service, and less than a quarter of those have more than one provider offering service at those speeds.
The resulting lack of competition leads to inflated prices, little incentive to modernize infrastructure, and shoddy service for poorer areas. Stagnation, inefficiency, and unfair consolidation are produced by private telecoms like Comcast, not public utilities like the EPB. Confronted with the oft-repeated argument that a publicly owned fiber network smothers competition and hurts consumers, Mayor Berke is uncharacteristically blunt: “I’ve seen no evidence of that.” Indeed, Chattanooga’s internet market is one of the most competitive in the country. The EPB isn’t the city’s only internet provider—the utility competes against four private ISPs, two of which also offer broadband-speed internet at prices a fraction of the national average. But in head-to-head competition, the EPB dominates. Its market share is larger than that of its four private counterparts combined.
And where would we go? Aluna asks. She already mentioned us hitting up friends who live near Cambridge, ones who haven’t experienced climate failure outside of the dearth of food lining the shelves. We’ve held off because that shame flares up. None of our community has the bandwidth for that type of support no matter how much we wish they could provide it. They ask us what happened, if we’re alright, how they might help, and we dodge each question enough to allow them an out. They always take it.
We can’t go to them, not now. While Boston as a whole might be running relatively fine, the pockets where they’ve shoved us will soon boil over. I sigh and send Aluna a message. Even though we’re sitting across from each other, I find a way to avoid her eyes. There’s a spot a little less than an hour out of town owned by Centra. Part of their data infrastructure, and it’s quiet. We don’t need to be there for long… Just give it time for things to settle over here, a few days at most—the SUV will hold its charge, so we won’t have any problems coming in and out. A mini-retreat? She doesn’t even crack a smile. Right before dinner, we get up and leave. Just like that. There’s nothing for us to bring along. All that’s left of our submerged home is in our SUV, sitting beneath platinum LED streetlights.
The need to overcome these environmental challenges has only bolstered our existing, highly globalized food system. States import and export goods in an attempt to feed populations and stabilize their own economies. The “food miles” add up; the emissions accumulate. Corporations reap incredible profits in the process.
And nevertheless, global hunger is rising. People are moving into cities at unprecedented rates. Farmers are aging out of the business worldwide. The corporations that control much of the food supply continue to consolidate. Given these pressures, a set of entrepreneurs want to build a climate-agnostic form of agriculture. What if you could grow avocados in Jordan and olives in the Arctic? Deploying the latest developments in automation, machine learning, and computer vision, these high-tech food startups grow indoors, nourishing plants through hydroponic or aeroponic systems and rhythmic exposure to LED light.
Even with decades of scientific work proving that our technologies have endangered the very survival of our (and countless other) species, our obsession with economic growth at all costs has barely budged. But if we are to listen to Silicon Valley entrepreneurs and their allies in government and academia, we should not worry about changing our collective way of living on the planet: climate change is simply a problem that can be solved with “disruptive” new engineering innovations, from carbon capture and storage to electric cars.
Yet the story of Mexico City’s struggles over water suggests that we should be skeptical of claims that environmental problems are ever neatly solved through technologies like these. I once asked a Tesla executive who came to Stanford to give a talk whether creating cheap and efficient electric cars wouldn’t simply encourage more driving, more cars, and, further down the line, crises related to lithium mining for batteries in places like Bolivia. (The idea that people with Teslas would drive more is an example of what economists refer to as the “rebound effect”: if you make something more efficient—and hence reduce the cost—people will tend to use it more, whether it’s driving electric cars or taking advantage of flood control infrastructure to build houses in a floodplain.) The executive responded by saying that “those are questions for philosophers—next question?” These are not questions for philosophers. They are questions for all of us—and especially engineers.
Inside Voices

If the problem with scale is less literal size and more these forms of concentrated, unaccountable power, then what are the remedies? As technology creates new kinds of infrastructural power, we need to develop new tools for holding that power accountable. We can imagine a few different strategies. First, we might turn, like Berle and Means, to the internal politics of technological firms themselves. We might push for the creation of independent oversight and ombudsman bodies within Facebook, Google, or other tech platforms.
For these bodies to be effective and legitimate, however, they would need to have significant autonomy and independence. They would also need to be relatively participatory, engaging a wider range of disciplines and stakeholders in their operations. Precisely because of the infrastructural nature of their power, technology platforms affect a wide range of groups: workers within the firms, consumers more broadly, small businesses and producers, media companies, and the like. Any attempt to create internal checks and balances will have to find ways to engage these different constituencies and provide them with a channel through which to raise concerns and flag problems to be resolved.
Fail Better

Silicon Valley thinks it has failure figured out. Even beyond the clichéd embrace of “failing better,” a tolerance for things not going quite right is baked into the tech industry. People take jobs and lose them, and go on to a new job. People create products that no one likes, and go on to create another product. People back companies that get investigated by the SEC, and go on to back other companies. They can even lie on behalf of a company like Theranos without any taint whatsoever. In Silicon Valley, it seems, there is no such thing as negative experience.
The attorneys and consultants who have grown old with the industry’s failures, from Pets.com to Pebble, are anything but harsh in assessing their “clients.” “They are not bad,” one old hand insists. Instead, “the question really becomes: how many new ideas can society handle?” Even Sherwood Partners doesn’t see itself as a repository of Silicon Valley’s fuck-ups. To them it’s about luck, bad timing, the wrong blend of personalities. “They didn’t fail, they just didn’t come in first.” That can be deeply charming: rather than make failure, messiness, and growth something to hide, the ethos of the tech industry puts fallibility and vulnerability at the center of life. The guys at Sherwood have some of that relaxed California vibe, plus a dose of paternalism—they wind down companies started by people less than half their age. They try to make it a teachable moment and move on.
Through the 1970s and into the early 1980s, most of the folks who used to be on the communes are still in the Bay Area. And the tech world is bubbling up around them. They need work, so many of them start working in the tech world. The folks associated with the commune movement—particularly Stewart Brand and the people formerly associated with the Whole Earth Catalog—begin to reimagine computers as the tools of countercultural change that they couldn’t make work in the 1960s.
Stewart Brand actually calls computers “the new LSD.” The fantasy is that they will be tools for the transformation of consciousness—that now, finally, we’ll be able to do with the computer what we couldn’t do with LSD and communes. We’ll be able to connect people through online systems and build new infrastructure around them.
To this day I know only one or two scientists who like talking this way. And there are good reasons why scientists remain very wary of this kind of language. I belong to the Defend Science movement and in most public circumstances I will speak softly about my own ontological and epistemological commitments. I will use representational language. I will defend less-than-strong objectivity because I think we have to, situationally.  Is that bad faith? Not exactly. It’s related to [what the postcolonial theorist Gayatri Chakravorty Spivak has called] “strategic essentialism.” There is a strategic use to speaking the same idiom as the people that you are sharing the room with. You craft a good-enough idiom so you can work on something together. I won’t always insist on what I think might be a stronger apparatus. I go with what we can make happen in the room together. And then we go further tomorrow.
In the struggles around climate change, for example, you have to join with your allies to block the cynical, well-funded, exterminationist machine that is rampant on the earth. I think my colleagues and I are doing that. We have not shut up, or given up on the apparatus that we developed. But one can foreground and background what is most salient depending on the historical conjuncture.
GW: It’s interesting when you think about dividing these more mysterious senses from the better understood ones. What is it that makes senses like vision or touch less mysterious? It’s that we can both look at the same thing and think to ourselves, “We had the same experience of that.” Or we can both run our hands over the same surface and feel more or less the same thing.
But then you start getting into other sensations. Let’s say we both kissed the same person—would it actually be the same? Or, going deeper, could I ever feel what the inside of your stomach feels like? Could I know what it is to experience the back pain that you feel? Could I as a man know what it is to undergo labor? That kind of empathy is not accessible to us with current technologies. But I’ve always thought that modern dance—something that you and I share a passion for—captures a bit of that quality. What I love most is watching dances where I feel like I’m getting inside the subjective physical experience of the dancer. That’s really what sets dance apart from other art forms for me.
Shun, sixty years old, used to work as the secretary of the Communist Party branch committee of a village in Henan province. He only recently became an active user of WeChat, a popular messaging and social-media app, after it began to offer more video content. Due to his lack of formal education, Shun says that he finds “watching short videos on the news posted by young people a more efficient way of information-seeking than reading long articles.” He belongs to a WeChat group chat for members of his village and, having silently followed the conversation for more than a year, he now shares videos that he finds informative with young villagers, hoping to connect with the new generation.
Online video also offers an essential source of information for acquiring new skills, especially for female villagers who are learning how to knit, dance, and cook. Feng, a forty-three-year-old housewife, has become the center of her social circle in her village by introducing group dancing techniques to her girlfriends. Feng has fewer than three years of formal schooling, but she followed tutorial videos on a popular mobile app called “Tangdou Group Dancing App.” Feng discovered these videos because of a recommendation algorithm. Video apps implement sophisticated tools to tailor personalized content for their users, and this is often how inexperienced internet users in rural China find content. The algorithm-driven recommendation systems are frequently mentioned by interviewees in my fieldwork in Henan Province, a less developed area in the central part of the country. They consider recommendations an easier and more intuitive way to discover new videos than search engines such as Baidu, which require some experience to know how to use effectively.
In early 2017, an investigative journalist uncovered a private Facebook group called Marines United, where hundreds of veterans and active-duty marines were circulating nude or invasive photos of military servicewomen without their knowledge. “Dozens of now-deleted Google Drive folders linked from the Facebook page included dossiers of women containing their names, military branches, nude photographs, screenshots of their social media accounts and images of sexual acts,” the journalist, Thomas Brennan, later wrote for the news site Reveal.

The Marines United scandal stood out for the coordinated nature and scale of the abuse, but it was only one egregious example of how toxic social media platforms have become. According to a study by the Pew Research Center in 2017, 41 percent of Americans have personally been subjected to abusive behavior online, and one in five have been the targets of particularly severe forms such as sexual harassment, stalking, revenge porn, physical threats, and sustained harassment over time. Those who experience online harassment suffer from mental or emotional stress or even fear for their personal safety, and the stakes are particularly high for young internet users: other studies have found a significant association between cyberbullying and depression, self-harm, and suicidal ideation in people aged 12–18.

Following the Marines United revelations, Facebook quickly shut down the private group, but similar ones immediately began cropping up on the platform. Most major social media platforms ban sexually explicit photos, particularly those flagged as non-consensual, so if a victim sees the content they can report it. But this doesn’t keep the images from being shared on private groups where they are less likely to be reported. As a result of mounting pressure from advocacy groups for victims of sexual assault, Facebook promised to take a more active role in addressing such harassment.

The result, which Facebook began rolling out in April 2017, and which now includes partnerships with nine organizations across eight countries, is one of the most proactive efforts by any of the social media companies to address online abuse. And yet, despite the amount of time and money that Facebook has spent on the program, the whole thing is ultimately doomed to fail. It’s a revealing failure, though, because it points to fundamental limitations in the way that social media companies think about—and have encouraged the rest of us to think about—the problem of online harm.
The Narrowsight Board

Facebook’s process for addressing revenge porn and other non-consensual sexual images requires victims to upload their images to the company, where a “specially trained” employee reviews the image and then creates a digital fingerprint of it. This allows image-matching software to detect if the same photo appears elsewhere on the site or is later uploaded again. This can potentially help some victims, but it has major shortcomings. It assumes that the victim has access to the non-consensual image, and it requires that the victim trust Facebook with extremely sensitive content. The image-matching software is also remarkably easy to fool; slight alterations to an image, such as changing the background, have been shown to elude the technology.

More importantly, the technology takes control away from the victim by assuming that deleting the images automatically is all that a victim wants. The victim never learns whether anyone else has tried to upload the images and has no proof for further action. Platforms even routinely ignore “preservation letters” from lawyers of victims of revenge porn, and delete crucial evidence.

At the most basic level, Facebook’s process, like other attempts to address online harm, suffers not from faulty algorithms, but from a crucial misrepresentation of the problem. Social media companies have construed a wide range of online harms as essentially problems of content (violating photos, violent or threatening posts, Nazi symbolism). As a result of this framing, the solution to online harm has largely been presented by these companies as “content moderation”: removing posts that a platform deems against the rules or toxic, and occasionally banning the user who posted the content.

Social media companies have a strong incentive to adopt the content moderation framework, which was originally developed to minimize spam, and all of the large social media companies moderate content to some extent. That’s because the quality of their platforms would spiral downward if they didn’t. Imagine logging into Facebook and seeing an unabated stream of violent images and junk messages—you probably wouldn’t want to log in again. Most advertisers don’t want their ads to show up next to such content either. A decline in user engagement and ad sales is bad for a social media company’s bottom line, and removing potentially offensive content is the cheapest way to ensure that doesn’t happen.
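Facebook has not published the internals of its fingerprinting system, but the general idea of perceptual hash matching can be sketched with the classic “average hash.” The code below is a generic illustration, not Facebook’s method, and the filenames are invented placeholders. It also makes the fragility concrete: an edit that shifts many pixels relative to the brightness mean, such as a changed background, pushes the hash past any sensible matching threshold.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple perceptual "average hash" of an image.

    The image is shrunk to hash_size x hash_size grayscale pixels;
    each bit of the hash records whether a pixel is brighter than
    the mean. Visually similar images yield similar hashes.
    """
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Two uploads "match" if their hashes differ in only a few bits.
    # A retouched background changes enough pixels that the distance
    # exceeds the threshold, and the re-upload slips through.
    h1 = average_hash("original.jpg")   # placeholder filename
    h2 = average_hash("reupload.jpg")   # placeholder filename
    print("match" if hamming_distance(h1, h2) <= 5 else "no match")
```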
Stam had backups of those sensor readings through the machinery’s software interface, but they had been remotely wiped by the robotics company, which had access to the readings through a maintenance contract. Luckily, and very unusually, Stam had additional independent backups. He used his own data to correlate the milk sensor readings with the onset of his cows’ illness, prove the robotics company’s liability, and win damages for his losses. But he is the exception that proves the rule.
Beyond 2FA

Now that food systems globally are increasingly vulnerable to digital manipulation, farmers need to protect themselves. The FBI’s 2016 joint memo with the USDA encourages both small and large-scale agriculture operations to implement standard digital security measures: using secure passwords, setting up two-factor authentication, accessing networks via VPNs, and having company-specific email accounts for employees.

But perhaps adding more technology to the technology problem is not the solution. One issue that extends well beyond agriculture is that there is no clear line of liability for the security of equipment and devices. To help demarcate where the responsibility lies, venture funds, development agencies, and government procurers could implement something like the US Department of Justice’s new policy on drone use. It requires partners and grantees to conduct a mandatory cybersecurity risk assessment and honor a data retention policy. Imposing such requirements on small organizations would be a financial burden, but perhaps such a cost is worth paying to protect farmers and food supplies.

While these steps are necessary, they are not sufficient. We have a globally connected food system, and ameliorating harm requires systems thinking. Hacks are inevitable when we use connected technologies; the more we become reliant upon them to bring in our harvests, the more we can be assured that these systems will be exploited.
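Of the measures on the FBI’s list, two-factor authentication is the most concrete to illustrate. Below is a minimal sketch of TOTP, the time-based one-time password algorithm (RFC 6238) behind most authenticator-app codes; the Base32 secret is an invented placeholder, and a real deployment would use a vetted library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password (RFC 6238).

    The server and the authenticator app share the secret; both derive
    the same short-lived code from the current 30-second time step.
    """
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval           # current time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Invented placeholder secret -- in practice this comes from the QR code
# scanned when two-factor authentication is first set up.
print(totp("JBSWY3DPEHPK3PXP"))
```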
Although most of Sherwood’s work is with investors, employees, and vendors, they also hold a massive database of patents amassed from their assignees. “We probably monetize more patents than anyone else in the world,” Pichinson says. And he’s not wrong: Agency IP, Sherwood’s sister company, is nominally a consultancy, but in fact spends most of its time actively exploring the applicability of patents left behind by the companies Sherwood has buried. Like what the William Morris agency does for screenplays, says Pichinson, who now operates from LA’s “Silicon Beach.” He makes it sound glamorous.
You Can’t Get Rid of Wealth

The guardian angels of better failure in Silicon Valley are the investors. When men like Pichinson are pretty Zen about failure, it makes sense—after all, it’s their business. When lawyers who charge by the hour seem okay with failure, then sure, why not, they get paid one way or the other. But what about the investors who sink money in ventures and either get some of it back or none of it back? It’s easy to assume that the shrug with which they treat every flop is a facade. It’s unnerving to realize that it’s absolutely not—and for good reason.
That said, founders in the crypto space still aim to raise orders of magnitude more in capital for projects that haven’t even launched yet, compared to venture-backed founders that have spent years building a profitable business. Last year, a project named Block.one raised at least $700 million for its tokens—even though all it had produced was a fourteen-page white paper. The shopping startup Stitch Fix, by contrast, raised less than one-fifth as much during its recent IPO, even though the business pulled in nearly $1 billion in revenue this fiscal year.
Whether ICOs will cannibalize part of the venture industry—or implode out of existence—is uncertain. What’s clear is that venture capital, or something conceptually like it, will continue to exist for the foreseeable future. Institutional investors that are managing billions of dollars of assets don’t have the time or bandwidth to evaluate tens of thousands of nascent projects to try to determine which ones might grow big enough to deliver large returns.
Crushing Substernal Pain

In the early 1960s, Maryann Bitzer was pursuing her master’s degree in educational psychology at the flagship campus of the University of Illinois. The university also employed her husband, Donald, who was using his engineering doctorate to investigate whether computers could be used effectively for education. Throughout the 1960s and into the 1970s, Donald led a team of researchers, including Maryann, in developing a computer network known as PLATO, Programmed Logic for Automatic Teaching Operations. PLATO comprised individual user terminals connected to a mainframe computer and, through the mainframe, to each other. The network went through several evolutions, and by the mid-1970s it included nearly 1,000 terminals around the United States, each with a flat-panel plasma touch screen, with applications including games, instant messaging, screen sharing, and email.
In a decision that ultimately benefited both of them, Maryann focused her master’s thesis on how computer-based education could work in nursing. She cited two motives for her study: a dearth of trained nursing instructors across the country, and the tremendous educational value for nursing students of working with “actual” patients. Using one of the early iterations of PLATO, which employed custom keysets and television-like cathode-ray tube screens, Maryann developed a course on treating heart attack patients. Then she delivered it to first-year nursing students at the university-associated Mercy Hospital. The course imaginatively integrated several components, immersing students in what sometimes seemed, behind the gloss of the new technology, like an actual experience of care.
Technosociologist Zeynep Tufekci makes the point that traditional political movement tactics have gotten easier over the years, “partly thanks to technology”: A single Facebook post can help launch a large march! Online tools make it easier to coordinate phone calls, and even automate them. Legislators have figured this out; they are less likely to be spooked just by marches or phone calls (though those are good to do: their absence signals weakness).
Her point is that tactics which once signalled “underlying strength” no longer do, by virtue of the ease of re-iterability; the threat is neutralized and the ruling order knows it. The same might be said of sexual practices which once were considered threats to capital’s reproduction through the family form and property relations. Technocapital soothes the status quo: there can be polyamorous configurations with BDSM dungeons in the basement, but the houses are owned.
So perhaps I can’t, we can’t, talk about sex non-confessionally; it’s a discourse constructed on the idea of revelation. That’s how truths about sex, or anything, are built—in the false belief that they are “found.” That’s what these sex stories are about: the myth of revelatory sex, and the truths it produces.
One is about a threesome I didn’t have, another is about certain porn that I don’t watch. They both involve an ex-partner whom I dated from my early-to-mid-twenties who believed in revolutionary sex to the point of ideology. These are cautionary tales in how easily invocations towards radical sexual practices—especially in the context of political movements—can be recuperated into patriarchal power structures, techno-capital, and the creation of more bourgeois desiring machines. And through them, I want to question what it means to talk about radical sex becoming recuperated at all.
At the same time, Silicon Valley’s tolerance for failure has long sustained an obsession with youth. If a founder fails, tech discourse interprets it as a sign of young vigor. In a country in which twenty-five-year-old white rapists are “still boys” and black twelve-year-olds on the playground “look like adults,” the question of who gets to be a kid and who counts as a grown-up is clearly charged with privilege.
In 2017, a chastened Travis Kalanick admitted, “I must fundamentally change as a leader and grow up.” Even in a place as chock-a-block with balding skateboarders and middle-aged trick-or-treaters as San Francisco, a forty-year-old CEO of a $15-billion company casting himself as an overenthusiastic kid who just needs to get his shit together is a bit much.
McKinsey & Company considers agriculture to be a “massive opportunity” and an area “ripe for disruption.” Goldman Sachs expects the market to soon be worth $240 billion. Despite questionable returns, agriculture has become a darling of VC; according to the VC fund AgFunder, agricultural technology startups secured nearly $17 billion in funding in 2018, up 43 percent from 2017.

Academic institutions, too, have turned their prowess towards fostering the development of new agricultural technology through startup accelerators, and towards enhancing the technologies of established corporations. Indeed, according to food systems expert Kevin Walker, academics in the US are becoming subcontractors of large multinational corporations, driven by the need to find funding for research that has “immediate and marketable benefits.” The US government and military are also invested in the research and development of precision agriculture tools. The US Navy has invested in agricultural robotic swarm technology. Many of the startups developing drones for commercial agriculture are run by military veterans or former defense contractors. Even economic development sector initiatives rely upon defense contractors.
These types of funding streams yield specific results. The demand by VCs and universities for exponential returns in a short time frame means that security is, as with many non-agricultural venture-backed software startups, an afterthought. Hardware is adapted from other sectors and software outsourced, with no plan for service, repair, or maintenance, leaving pieces of equipment perpetually vulnerable to exploitation once they reach end-of-life. And, even if they wanted to, private companies trying to translate government research and development into consumer agricultural products don’t have the billion-dollar budgets to prioritize security in the same way that DARPA might.

Breach Party

Many of the same security vulnerabilities that plague battery-powered, cloud-connected devices in general affect farmers adopting precision agriculture. In an interview with precision agriculture expert Marc Window, a professor associated with Hands Free Hectare said of the project’s approach to security, “As with most of our ag robot developments we use technology that has been developed outside the ag area and migrate it over as and when needed. Security has not been a hot topic at all recently as we are still getting the fundamental systems working. Mostly we just use Wi-Fi.” This approach to (not) securing connected agricultural machinery is the norm, and it means that, in the same way that corporations can remotely brick a piece of machinery, malicious actors could hypothetically do the same, bringing a fleet of tractors down at once. One can imagine a scenario in which a piece of code is deployed to disrupt the harvest of entire nations. Or a scenario in which chicken farmers who use web-based software to remotely control the temperature of their hatcheries find their cooling systems manipulated, killing their animals.

These are not purely hypothetical. China’s environmental efforts are being thwarted by companies doctoring surveillance camera footage and remotely altering or deleting undesirable information in automatic monitoring systems in order to appear more environmentally friendly than they are. A smart irrigation system in Israel has reportedly been hacked by the Syrian Electronic Army, a hacker group. Smart irrigation systems in general can be manipulated to empty entire water reservoirs or apply the wrong amounts of fertilizer and chemicals.
“Workers must reach punishingly high rates, with each act measured for efficiency and quality,” writes Martin Harvey, an Amazon warehouse worker and graduate student. “The impacts of this process on human bodies and minds is horrific: joint pain, carpal tunnel, blown backs, anxiety, and depression are all common aspects of the work.”

“There is no way to do a job without being ‘creative’ with ‘Do your job safely, do it correctly, but make rate,’” reads a Reddit post on a thread about safety hazards at fulfillment centers. And because inaccuracy and inefficiency can cost you the job, workers are implicitly encouraged to skirt the safety rules.

Ashleigh told me she frequently ignored aches and pains and injuries while on the job. “You smash your finger in a crate, you’re going to hold your breath and keep going,” she said. “Because otherwise, A, they’re going to find a way to tell you it’s your fault or, B, if you stop and complain, go to the [medical] office, that will affect your rate.” Then it will be up to a manager’s discretion whether the note from AmCare, Amazon’s in-house medical office, is enough to give you a break on your numbers.

When they are severely injured on the job, a Guardian investigation found, employees have had to fight Amazon for workers’ comp. Michelle Quinones of Fort Worth, Texas was sent back to the warehouse floor from AmCare at least ten times after reporting carpal tunnel pain. When her wrist finally needed surgery, Amazon’s workers’ comp insurer fought her for over a year before paying for the procedure.

Anxiety and severe stress about meeting rate is also ubiquitous in the online forums and groups I visited. “Anybody else have nightmares and stress about not hitting rate?” reads a post with dozens of responses on an Amazon warehouse subreddit. “I constantly dread going to work… I hate stowing and I can’t get better no matter how hard I try… I drive home exhausted and lay in bed stressed about how I’m going to do the next day. I’ve been here 6 months almost, surprise I’m still even employed… I love amazon, but then I hate it. I just can’t do it and the stress is killing me.”
Some responses offer strategies for stowing quickly, while others debate productivity-enhancing substances. “Caffeine helped me. Red Bull, monster rockstar and now I’ve discovered 5 hour energy,” says one worker. “No the energy drinks just make you sweat a lot more and cause anxiety,” counters another. “CBD oil… might help (after work before bed) with stress and anxiety. It[’]s not illegal and won[’]t fail any drug tests it[’]s not like THC.” Public forums like these (and many private ones elsewhere on the internet) are collaborative spaces where agitation — shared expressions of anger and grievance — can congeal into solidarity. At a company that is notoriously parsimonious about Time Off Task, forums function as de facto break rooms where workers commiserate, complain, and perhaps even entertain collective action.
Business as Usual

In the wake of Five Star’s 2017 leadership election, and the hacks by Luigi Gubello and rogue0, the Italian Data Protection Authority conducted an investigation into Casaleggio Associates for breaking data privacy laws. The hacks had revealed that the company was collecting an extraordinary amount of members’ personal data, along with their voting records, and combining this with data gleaned from members’ social media accounts. It’s the sort of information that could potentially be used to create highly targeted messaging for every member of the Five Star party, thus deepening Casaleggio Associates’ centralized control over the ostensibly popular movement.
Around the same time as that investigation was underway, Five Star took legal action against Gubello, whom party leaders now accused of being rogue0. In January 2018, Italian police tracked down Gubello at his girlfriend’s house in Trieste and examined his phone and computer. Meanwhile, Five Star claimed Rousseau’s vulnerabilities had been addressed, and described the platform as a “fortress.” But in February, just a month before the 2018 Italian general election, rogue0 struck again, showing that they could still access the Rousseau database and take control of administrator accounts.

In March, Five Star won a plurality of votes in the general election, and formed a coalition government with a nativist right-wing party called Lega, or the League. Di Maio was made the Minister of Labor and Social Policies, the Minister of Economic Development, and the Deputy Prime Minister. (He now serves as the government’s Minister of Foreign Affairs.)

Rogue0’s identity has not been revealed, though the journalist Jacopo Iacoboni has perhaps come closest to discovering it. He has had a handful of conversations with the hacker via direct messages on Twitter. Iacoboni says it was rogue0 who reached out to him. “I never knew his real identity,” Iacoboni told me, assuming the hacker was one person, and a man. “But he seemed to me someone who knew people at Casaleggio Associates very well—this was my impression after our Q&A.”

Today, Rousseau’s Movable Type content management system is gone. User passwords are now allowed to be longer than the previous requirement of just eight characters. “In fact, following orders by our [Data] Protection Authority, the Rousseau website was practically rewritten from scratch,” noted security expert Fabrizio Carimati. But it’s not clear exactly what’s replaced Five Star’s old systems. “Casaleggio now claims that the platform has been completely redone,” Iacoboni said, before noting that this is impossible to verify.

But the vulnerabilities go deeper than software. Luigi Gubello believes that for voters to put their faith in digital democracy, the systems being used need to be transparent. Yet algorithms, coding, cryptography—the building blocks behind a digital democracy—require expertise to fully grasp. “People must be able to trust the election process,” Gubello insists. “Can you trust it if you cannot understand and really check it?”
After being denied another promotion, one that she’d earned several times over, she eventually learned that the men evaluating her were resigning from the promotions board rather than making a decision on her case. “They disapproved on principle of women holding managerial posts,” she found out, so they would rather resign than consider her for a promotion. “I was devastated by this: it felt like a very personal rejection,” she recalled.
After hitting the glass ceiling first in government and then in industry, Shirley did what women were supposed to do—what the two Annes and so many other women had been encouraged to do—she got married and resigned from her position. But, unlike the Annes, she wasn’t happy about it. She still had the skills, the intelligence, and the drive to work in computing, and she knew many other women who were in the same situation—being stymied in their careers not because they weren’t good enough, but because they were women.
How? One way is by encouraging and enabling different government agencies to pool information. Under the social credit system, several government bureaus have not only developed their own blacklists, but to date have signed forty memoranda of understanding that enable them to share information with one another to ensure that blacklisted individuals are duly punished. Another way that the social credit system strengthens blacklists is by fostering closer communication not just within government but between government and industry—in particular, with China’s biggest technology firms. Previously, blacklisted people who could not purchase airline tickets through official channels might still have managed to do so through websites like CTrip.com or the in-app travel booking feature of the mobile wallet service Alipay. That is no longer possible under agreements that these companies and several dozen others have signed with China’s National Development and Reform Commission (NDRC), the powerful government body that has spearheaded the social credit system’s development. According to available government and state media reports, one type of agreement, called “information sharing,” involves companies receiving government-issued blacklists, which they then match to their user base in order to prevent blacklisted people from performing certain activities like buying airplane tickets. Another kind of agreement, called “joint rewards and punishments,” restricts the behavior of blacklisted individuals even further: blacklisted users of Alipay, for instance, are unable to buy so-called “luxury items”—although it is unclear whether it is Alipay or the government that determines which items fall into this category.

By forging partnerships with Chinese technology companies, the state ensures that blacklisted people can’t avoid punishment. These partnerships make it harder for individuals to evade restrictions in the non-state economy, which is otherwise farther outside of the government’s sphere of control. But they also open the door to the possibility of a much more expansive social credit system, since technology companies have a wealth of information about Chinese citizens. Still, it remains unclear when and how companies might share their data with the state—although it would be difficult for them to avoid doing so if asked.
Webs of Trust

In addition to blacklisting, China also has “red-listing.” This involves identifying people whose behavior is considered exemplary of “trustworthiness,” which includes paying bills and taxes on time or, in some cities, doing volunteer work and donating blood. There are also more specialized examples of how rewards are granted: the government has a national “action plan” for encouraging young people to do volunteer work, and those volunteers recognized as outstanding are red-listed. The benefits they receive as a result include having their job applications to Tencent prioritized, paying discounted mobile phone rates through Alibaba, getting coupons for shopping on Alibaba’s ecommerce site TMall, and, for 300 overseas volunteers, enjoying free accommodations in an AliTravel-sponsored program. The state may at some point also share information about their “red lists” with tech companies in order to confer more benefits. For instance, ride-sharing behemoth Didi Chuxing has partnered with the NDRC’s “Xinyi+” (信易+, akin to “credit convenience”) project, which may begin to offer red-listed riders discounts, priority booking of cabs, and deposit-free bike rentals.

In some instances, blacklists are adapting to new media while retaining their original function of shaming people into changing their behavior. The enormously popular social video streaming app TikTok (抖音, douyin) has partnered with a local court in Nanning, Guangxi to display photographs of blacklisted people as advertisements between videos, in some cases offering reward payments, calculated as a percentage of the amount the person owes, for information about these people’s whereabouts. Much like the other apps and websites that take part in these state-sponsored efforts, TikTok does not disclose in its user-facing terms of service that it works with the local government of Nanning, and potentially other cities, to publicly shame blacklisted individuals.
Supernatural justifications for treatment techniques eventually gave way to pseudoscientific ones; prayer was replaced by bloodletting and cocaine (and more prayer). Wilhelm Fliess engaged in surgical trial-and-error on his collaborator Emma Eckstein. His friend Sigmund Freud institutionalized female hysteria. Franz Joseph Gall performed backbends to legitimize racism via phrenology.
Then, in 1910, the Flexner Report caused a paradigmatic shift in medical education. Abraham Flexner was not a doctor, but a secondary school principal from Louisville, Kentucky, who later joined the Carnegie Foundation for the Advancement of Teaching. It was there that he wrote “Medical Education in the United States and Canada,” and transformed the lives of millions of people.
Take the mutual fund industry. It has more than a hundred thousand employees in the US. And every one of those jobs is at risk from the realization that the economic value of those funds is replicable with the right computer systems. For the moment, those jobs are sustained by inertia, or they are sustained by a story about why a certain manager is going to make you more money than an index fund. But that’s changing. That change will play out over the next couple of years.
Take the big money managers in Boston like Fidelity and Putnam. Those are old, large institutions. Effectively all of those jobs are at risk unless they evolve fast. And even if they do, automation will cut deep. Hedge funds, same thing. Some of them will be able to eke out value from the development of new techniques, but everyone else will be replaced by computers.
What they were offering is what Silicon Valley calls a “talent acquisition.” They wanted to buy the company, but they weren’t going to take our technology. They weren’t going to take our code. They just wanted to get the two of us working for them. They put me and my cofounder through an interview process, and it was the same standard interview process I’d already been through before. But at the end of it, they made an offer with a slightly different format. It was clear that we were talking to people who worked in their mergers and acquisitions (M&A) department rather than in recruiting. Those people have the authority to write bigger checks, and they’re supposed to be thinking about what the company is going to need a little further down the line.
We were very green, so we didn’t know what a good offer looked like. But we did know that we should go get at least one competing offer, to see what our options were. So, using our network of contacts from college, we reached out to somebody who had the ear of a few VPs at a bigger company that made similar acquisition offers. We said, “Hey, we have this offer from one of your competitors. They’re moving fast. Can we start a conversation with you?” Recruiters famously don’t make the process very fast for people. You can do an interview and not hear back for a month, for instance. It’s frustrating for most people coming in the front door. But it’s very different if you have an offer from a competitor. An offer from a competitor is always the best way to get their attention, even if they have ignored you in the past. And doubly so through the M&A route.
But it is also easy to have a conversation with ELIZA that goes nowhere:

HOW DO YOU DO. PLEASE TELL ME YOUR PROBLEM
I have so much work to do.
THAT’S QUITE INTERESTING
Not really.
THAT IS INTERESTING. PLEASE CONTINUE
It’s not.
DO YOU FEEL STRONGLY ABOUT DISCUSSING SUCH THINGS

It usually becomes clear to modern users within minutes how fragile ELIZA’s illusion of real conversation is. In the late 1960s, however, the program left a lasting impression. The astrophysicist and science popularizer Carl Sagan, in a 1975 essay for Natural History magazine, predicted a time in which such therapy would be commonplace, delivered through “a network of computer psychotherapeutic terminals, something like arrays of large telephone booths.” Even established mental health professionals began to seriously consider the idea of computerized therapy. Perhaps most notable was the psychiatrist Kenneth Colby, who later developed his own mental health chatbots and once told a reporter, “after all, the computer doesn’t burn out, look down on you, or try to have sex with you.”

Weizenbaum himself believed that ELIZA only demonstrated that computers did not have to actually understand anything in order to mimic everyday conversations. The year after ELIZA was released, his more famous colleague at MIT, Marvin Minsky, declared that “within a generation, the problem of creating ‘artificial intelligence’ will substantially be solved.” But ELIZA helped Weizenbaum to develop a more skeptical view of computer science, and of the relationship between computer and human intelligence.
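Weizenbaum wrote the original ELIZA in MAD-SLIP for MIT’s CTSS; the short Python sketch below is only a schematic imitation with a made-up rule list, not his script. But it shows how little machinery the illusion requires: surface patterns plus pronoun swaps, with a stock phrase when nothing matches.

```python
import re

# A minimal sketch of ELIZA-style keyword reflection (invented rules,
# not Weizenbaum's original program): match a surface pattern, echo the
# user's words back with pronouns swapped. Nothing is "understood."
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "WHY DO YOU NEED {0}"),
    (re.compile(r"i am (.*)", re.I), "HOW LONG HAVE YOU BEEN {0}"),
    (re.compile(r"because (.*)", re.I), "IS THAT THE REAL REASON"),
]
DEFAULT = "PLEASE GO ON"

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned response for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return DEFAULT

# >>> respond("I need a vacation.")
# 'WHY DO YOU NEED a vacation'
# >>> respond("Not really.")
# 'PLEASE GO ON'
```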
Weizenbaum argued that even if a future computer were powerful enough to perform automated therapy, it would still be wrong. Human intelligence and computer logic are fundamentally different processes and wholly “alien” to one another, he wrote in his 1976 book Computer Power and Human Reason. As Zachary Loeb explains in the introduction to Islands in the Cyberstream, a posthumously published interview with Weizenbaum, “Computers excelled at tasks involving quantification, but for Weizenbaum there was much about human beings that simply could not be quantified.” As tempting as it was for computer scientists to believe that computers could model the world around them or human thought, in truth they could only create their own separate reality. “If Weizenbaum called for renunciation of the computer, in certain contexts,” Loeb continues later, “it was because the embrace of the computer in all contexts had led to a renunciation of the human.”

In her paper “Authenticity in the age of digital companions,” Sherry Turkle, a fellow MIT professor who taught classes with Weizenbaum, recounted how ELIZA’s reception informed his stance:

Weizenbaum found it disturbing that the program was being treated as more than a parlor game. If the software elicited trust, it was only by tricking those who used it. From this viewpoint, if Eliza was a benchmark, it was because the software marked a crisis in authenticity: people did not care if their life narratives were really understood. The act of telling them created enough meaning on its own.
The stakes for false positives and false negatives differ, depending on the context and audience. While marking the occasional baby photo as pornography is one kind of problem for the users involved, incorrectly identifying black skin in ways systemically different from white skin is a different kind of problem—a public problem about representation and equity, rather than a consumer problem about efficiency and inconvenience.
These platforms now function at a scale and under a set of expectations that increasingly demand automation. Yet the kinds of decisions that platforms must make, especially in content moderation, are precisely the kinds of decisions that should not be automated, and perhaps cannot be. They are judgments of value, meaning, importance, and offense. They depend both on a human revulsion to the horrific and a human sensitivity to contested cultural values.
In this issue you’ll find Marxists, Wynterians, Black speculative fiction, poetry written inside a cage, a graphic story about internet shutdowns in Kashmir, abolitionists, and the unaffiliated. In this issue you’ll find many beacons because, like Neta Bomani’s tween zine insists, we need to move beyond The Way. As guest editor, I chose to curate love letters over a manifesto—because I know plans and leaders get captured or beheaded, but we can nourish an otherwise set of relations to each other while we strategize on getting free.

Postscript by Ben Tarnoff

One December morning in 2020, I DM’d Khadijah on Twitter. We’d never spoken before, but I’d just read a recent essay of hers, “On the Moral Collapse of AI Ethics,” and loved it, and wanted her to contribute to Logic. She said she’d be in touch with some further thoughts.
A couple weeks later, she followed up by email. What she really wanted to do wasn’t write a piece, she said, but edit a whole issue: I’ve been thinking about concrete next steps to move beyond calling out the failure of the status quo to providing an alternate beacon for people who are looking for space to build and think critically, take risks and specifically room to think about currently under resourced domains ie tech/data policy in the global south, grassroots response beyond the right to refuse surveillance, bringing in agroecology, the core of Black studies (ie not just citations for bias but the epistemic and historical challenges being raised at the forefront of the field) etc.
The aggregate product of all this labor is extremely valuable for Foursquare’s advertisers and investors, at least in theory. The company is reportedly on track to hit $100 million in annual revenue in 2018, and its 2016 fundraising round gave it a valuation of $325 million. By industry standards, this isn’t even particularly high: its rival Yelp, which is publicly traded, has a market capitalization of over $3.5 billion. Google purchased Zagat in 2011 for $151 million, after infamously failing to acquire Yelp, and both Apple and Facebook have their own proprietary ratings of real-world businesses.
Schrödinger’s Latte Power users and reviewers don’t just create value for companies like Foursquare, however. Their labor also enriches the developers, brokers, landlords, and small business owners that profit from the gentrification of urban space. That’s because LBS don’t just measure and map the city—they transform it.
And if that’s all there were to it, a doctor using an EMR would be no more worrisome than an accountant switching out her paper ledger for Microsoft Excel. But underlying EMRs is an approach to organizing knowledge that is deeply antithetical to how doctors are trained to practice and to see themselves. When an EMR implementation team walks into a clinical environment, the result is roughly that of two alien races attempting to communicate across a cultural and linguistic divide.
When building a tool, a natural starting point for software developers is to identify the scope, parameters, and flow of information among its potential users. What kind of conversation will the software facilitate? What sort of work will be carried out? This approach tends to standardize individual behavior. Software may enable the exchange of information, but it can only do so within the scope of predetermined words and actions. To accommodate the greatest number of people, software defines the range of possible choices and organizes them into decision trees.
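An intake flow in an EMR, for instance, is often literally a tree of predetermined branches. A toy sketch (the questions, options, and node names are all invented for illustration):

```python
# A hypothetical intake flow: each node offers a fixed menu of answers,
# and every answer leads to exactly one next node. Anything a patient or
# clinician might say outside these options simply cannot be recorded.
FLOW = {
    "start": ("Reason for visit?", {"pain": "pain_site", "checkup": "vitals"}),
    "pain_site": ("Where is the pain?", {"head": "vitals", "chest": "urgent"}),
    "vitals": ("Record vitals.", {}),
    "urgent": ("Escalate to urgent care.", {}),
}

def walk(answers):
    node = "start"
    while True:
        prompt, options = FLOW[node]
        print(prompt)
        if not options:
            return node
        node = options[answers.pop(0)]  # an off-menu answer raises KeyError

walk(["pain", "chest"])
# -> Reason for visit? / Where is the pain? / Escalate to urgent care.
```

The `KeyError` on an off-menu answer is the standardization the passage describes: the software can only hear what its designers anticipated.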
The arc of this story reflected the simplified, and sometimes entirely fictional, assumptions of Forrester’s model. On its most basic level, Urban Dynamics modeled the relationship between population, housing stock, and industrial buildings against a background of government policies. The city inside Forrester’s model was a highly abstracted one. There were no neighborhoods, no parks, no roads, no suburbs, and no racial or ethnic conflicts. (In fact, the people inside the model didn’t belong to racial, ethnic, or gender categories at all.) Economic and political life in the outside world had no effect on the simulated city. To the extent that the world outside the model existed, it served only as a source for migrants into the city, and a place for them to flee to if the city became inhospitable.
The residents of Forrester’s simulated city belonged to one of three class categories: “managerial-professional,” “worker,” and “underemployed.” As one moved down the class ladder in the Urban Dynamics model, classist assumptions about the urban poor piled up: birth rates were higher, tax contributions were lower, and the use of public expenditures increased. This meant that the urban poor served as a massive drag on the health of the simulated city: they did not add to its economic life, they had large families which strained public services, and they contributed only paltry amounts to the city’s coffers.
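Mechanically, models like Forrester's were coupled difference equations: stocks updated each simulated year by flows whose rates depend on other stocks. A drastically simplified sketch in that spirit, with two stocks instead of Forrester's hundreds of equations and every coefficient invented:

```python
# A toy stock-and-flow loop in the spirit of Urban Dynamics: population is
# attracted by housing availability and repelled by crowding. All numbers
# here are made up for illustration.
population, housing = 100_000.0, 40_000.0

for year in range(50):
    attractiveness = housing / population          # invented "attractiveness" ratio
    in_migration = 0.05 * population * attractiveness
    out_migration = 0.03 * population * (1 - attractiveness)
    construction = 0.02 * housing
    demolition = 0.01 * housing
    population += in_migration - out_migration
    housing += construction - demolition

print(f"after 50 years: population {population:,.0f}, housing {housing:,.0f}")
```

Every behavioral claim lives in the coefficients, which is why the assumptions Forrester baked into his parameters mattered so much: the "results" of such a model are largely restatements of its inputs.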
Thus it was natural for the Irish to be paid less and live in appalling slums. It was natural for women to be paid less while also performing the unpaid work of raising children — children who went into the mills as young as five. It was natural to enslave human beings of African origin and put them to work harvesting the cotton that those mills turned into textiles. It was natural to dispossess and exterminate the Indigenous people who had formerly inhabited the land that became those cotton fields.  Capitalism doesn’t invent human difference, of course. Humans look different; they speak different languages; they come from different communities and cultures. But capitalism makes these differences make more of a difference to people’s lives. Differences become more differential. They become differences of capacity and value — differences in how much a human being is worth, or if they’re even considered human at all.  The political scientist Cedric J. Robinson argued that this difference-making has been a core feature of capitalism from the beginning — he called it racial capitalism for this reason. Feudal Europe was highly racialized, Robinson said. As Europeans conquered and colonized one another, they came up with ideas about racial difference in order to justify why, for instance, Slavs should be slaves. (In fact, Slavs were so frequently enslaved in the Middle Ages that they supplied the source of the word “slave,” in English and several other European languages.) If racial thinking saturated the societies where capitalism first emerged, capitalism subsequently picked up these concepts and extended them. It generated deeper and more varied ideas about racial difference in order to justify the new relationships of domination that the imperative of accumulation demanded — particularly as Europeans began carving up Asia, Africa, and the Americas. “The tendency of European civilization through capitalism,” Robinson wrote, “was thus not to homogenize but to differentiate — to exaggerate regional, subcultural, and dialectical differences into ‘racial’ ones.”
Robinson’s insight helps clarify another crucial aspect of how tech operates. If tech intensifies capitalism’s contradiction between wealth being collectively produced and privately owned, it also intensifies capitalism’s tendency to slice people into different groups and assign them different capacities and values. Indeed, the two operations are closely related. “Capital can only be capital when it is accumulating,” says the theorist Jodi Melamed, “and it can only accumulate by producing and moving through relations of severe inequality among human groups.” The network for making wealth, in other words, relies on the engine for making difference.  That engine is now made of software. Differentiation happens at an algorithmic level. The abundant data that flows from mass digitization, combined with the ability of machine learning algorithms to find patterns in that data, has given capitalism vastly more powerful tools for segmenting and sorting humanity.
I would say the same exact thing in my explanation of why the term ethnicity must die. The thing is, the work “ethnicity” does in America is perhaps different than in other parts of the world. The most contention is around the category of Hispanic: they have you fill out demographic forms around race, and the options presented are Hispanic or Black—what the fuck is Hispanic? Who agreed to even be from Spain?  What to do about Logic (and Kevin)? Shout out to Logic. I’m appreciative that they let me hijack this shit, right? But I’ve definitely been thinking about what it means to hijack Logic, because “logic” has been so central to the dominant critique of mis/disinformation—that these ignorant, economically anxious white actors are illogical. They’re not pledging allegiance to science and are undermining the Enlightenment rationality that we fought for, that our forefathers fought for—though maybe they shouldn’t have committed genocide against Indigenous people and enslaved the Blacks along the way. But they say, “We recognize that, we make an acknowledgment of past harm,” and now let’s focus on logic. So, how do we intervene in the context of this way of thinking about logic, enlightenment, and rationality?  The best thing we can do is establish the validity of alternative epistemological standpoints. What I mean when I say that is that every culture approaches their version of reality differently. Whether it’s geographic, whether it’s genetic, whether it’s environmental, whether it’s political—they all approach it differently. And for the last five or six hundred years, we’ve been forced to endure a world that is structured by a white disavowal of their own embodied consciousness, disdain for women, and anti-Black racism. Those three things are the pillars of what whiteness is, so rationality is a disavowal of not just feelings but also a disavowal of the role that women have in the decision-making process, because women under rationality are considered hysterical.
If you take women out of the decision-making process, you basically get the thoughts of white men. Under whiteness, white men are valued for their ability to resist their dark desires, their empathy, and their care, because they’re making “unemotional” decisions about how to apportion resources. That comes directly back into the data science that we’re arguing with and about, because, to them, the most elegant code is the code that is beautiful in its simplicity and its aesthetic minimalism. The most elegant code also does things to social situations that seem as if they are situations devoid of emotional resonance.  So we could talk about social welfare algorithms where people are now being asked to fill out entire questionnaires about what type of toothpaste they use, because their answer to that question will be put into a database and used to calculate that they are not deserving of welfare benefits because they have too good a taste in toothpaste. How dare you have sensitive teeth? You don’t deserve Sensodyne, you better go get you some Arm & Hammer toothpaste for a dollar! So it’s this continual asceticism, this denial of the pleasures, or even the denial of the experience of the visceral, of the libidinal. That is one of the core functions of whiteness.  One of the most interesting trends during the early stages of the pandemic was that African countries were not experiencing Covid-19 at the same rates as white Western nations. And it turns out that it was because these folk, since they had lived with chronic deadly diseases for centuries, had built up protocols for infection control strategies. And there are other examples where people are doing fantastic things for themselves, of themselves, by themselves that are not beholden to a Western paradigm.  But, to go back to an American context, how are we supposed to gain control over these information resources in order to institute a different epistemological standpoint? Because one of the other things that whiteness is good at is denying access to those resources, so that we can achieve—I hate the word sovereignty—some sort of valence of being part of this nation.
CH: I think it was Xiaowei and I collaborating—I remember a lot of back and forth, one person making significant additions to the book, and then pulling those changes into templates and trying to make it more standardized going forward. We had a lot of learnings along the way. Tech Against Trump, which came out between our first and second issues, was incredibly painful because of the footnotes—our original footnote design was really beautiful, but it required manually moving text boxes around. After that, we decided we needed to change the designs so that they’re easier to lay out and figure out some efficiencies that would take out some of the manual labor. XW: When I talk to folks who are doing print publications now they often use Figma or Canva. It’s wild that just in seven years it’s changed so much.
JF: Another thing you need to worry about when having a print object is making sure there’s a barcode for retailers to scan. The unique identifier for the barcode is called an ISBN, and is specific to the format—you’ll need one for print, one for digital. You can buy the ISBNs online, which we did at first one-by-one even though they’re cheaper in bulk because we had a very limited budget. Then, we just used a free online barcode generator to slap it on the cover.
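If you'd rather not rely on a web tool, the same step can be scripted. A sketch using the python-barcode library (the ISBN below is a made-up placeholder with a valid check digit, not a real assignment—substitute the one you bought):

```python
# pip install python-barcode pillow
import barcode
from barcode.writer import ImageWriter

# Placeholder ISBN-13; swap in your purchased number.
isbn = barcode.get("isbn13", "9781234567897", writer=ImageWriter())

# Writes cover_barcode.png, ready to drop onto the back cover.
isbn.save("cover_barcode")
```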
I’ve given interviews and talks about this in the context of the virality of videos of police murders of unarmed African Americans. No matter that these videos contribute to a culture of sustained trauma and violence against African Americans, they are heavily trafficked because they generate a lot of views and a lot of web traffic.
In the end, these companies are beholden to their shareholders and they’re committed to maximizing profit before all else—and these videos contribute to a profitable bottom line. But we need more than just maximizing profit as the value system in our society. So, engineers may not be malicious, of course. But I don’t think they have the requisite education in the humanities and social sciences to incorporate other frameworks into their work. And we see the outcomes of that.
On the organizing tradition question, I have to ask: have you read the Wikipedia article on Poznań?  No.  I was reading it ahead of our conversation and it links to an article about these protests in 1956.  Oh, yeah.  The government raised the work quota so that it wouldn’t have to pay workers at the Cegielski metal factory their full compensation. Workers responded by walking out one morning in what turned into a march of 100,000 people. Raising an arbitrary quota in order to lower pay sounds like something Amazon would do, so I was curious if that history impacts organizing in your city today.  It’s interesting that you mention this. Our union is just one section of a larger union called Workers’ Initiative. Workers’ Initiative was started in the early 2000s by workers at the Cegielski factory. What happened in that factory in 1956 was a massive moment in the history of organizing against the Communist state, and it was also connected to the Hungarian Uprising later that year. Those workers faced harsh retaliation and eighty protesters were killed. What we’ve done doesn’t compare, but we are inspired by that history.
To give you a bit more of the historical context, there was a transformation in Poland when the old regime collapsed. The new regime that came into power in the 1990s was basically shock therapy for working-class people, and all the unions supported it, including Solidarność leaders who used that period to get into politics. In the 2000s, young workers at the Cegielski factory had had enough and decided that they wanted a new form of labor organizing. Workers’ Initiative came out of that. We are inspired by that tradition and the rejection of the big unions that supported company “restructuring,” which always meant dismissals.  So we are connected to that factory emotionally, but there’s another connection as well. The factory had 20,000 workers in 1956. Now it has something like 800. The old working class that made up the heavy industry sector—that factory makes engines—was destroyed in Poland in the 1990s and 2000s. Our union had a lot of discussions about what the new field of working-class formation would be. As Poland has become a big warehouse for Western Europe, we’ve come to think that logistics will be the crucial sector for the future of the labor movement.  So yes, we know the story of 1956 and we’ve tried to learn from it.
What would’ve been more meaningful? I could have worked on something that actually had a positive impact on society. Or I could’ve set out to solve a problem that had more interesting technology behind it. Making the World A Better Place What do you think changed your mind? Because you didn’t feel that way at first.
I think it was seeing more of the industry, spending time at these big companies, and observing certain things. One of those had to do with bias. Silicon Valley has been under attack for the past several years for having a lot of bias in its compensation practices around gender and race. As a result, the big companies have tried to standardize the way they pay people. They’ll break compensation into salary bands that are supposed to match an employee’s level. So if you’re at a certain level, your pay falls somewhere within the corresponding salary band. Which means that if a man and a woman are in the same band, they’re going to be paid roughly the same.
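The mechanics are simple enough to sketch. A toy version (the levels and band boundaries are invented):

```python
# Hypothetical salary bands keyed by engineering level, in USD.
BANDS = {3: (110_000, 140_000), 4: (135_000, 175_000), 5: (170_000, 220_000)}

def check_offer(level: int, salary: int) -> bool:
    """Return True if the offer falls inside the band for this level."""
    low, high = BANDS[level]
    return low <= salary <= high

print(check_offer(4, 150_000))  # True: within band, whoever the candidate is
print(check_offer(4, 200_000))  # False: out of band, flagged for review
```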
Abolitionist Futures You mentioned earlier that your goal is to abolish these systems, not reform them. What does an abolitionist campaign against a carceral technology look like? I’m working on a campaign right now in Portland to ban both the private and public use of facial recognition technology. A handful of cities have banned the use of facial recognition by local government entities like police departments, but private businesses have been unaffected. The Portland ban would extend to the private sector too.
It's been controversial because a lot of people who are civil rights oriented have been worried that you're infringing on an individual's ability to use this technology if they want to. But if you're organizing from an abolitionist perspective, you recognize that the private rollout of this technology is still a carceral technology. These technologies never exist without their carceral counterpart. Take the introduction of face-scanning software to unlock people’s phones. Industry rolls out these artifacts of private consumption that normalize the existence of these technologies—technologies that have always been entangled in carceral systems.  We recognize facial recognition technology as a weapon used by law enforcement to identify, profile, and enact violence against categories of people. So individuals opting in to unlock their phones with facial recognition serves to improve a technology that has necessarily violent implications—individuals opting in, in other words, are participating in the creation and refinement of this weapon. So when we organize to abolish these technologies, we organize against their conditions of possibility as much as their immediate manifestation. We organize against the logics, relationships, and systems that scaffold their existence.
[Image caption: A young woman named Anne Davis wears a punch tape dress at her “retirement party” as she leaves her job to get married.] In the real world, this juggling act was difficult, and often impossible. Middle-class women who had the mettle and the privilege to try it encountered major obstacles. And the fact that it was still seen as completely inappropriate for women to hold authority over men in the workplace meant that women who struggled to stay in their jobs rarely got very far. So the real-life Anne celebrated her retirement with half a dozen other young women computer workers—most of whom would go down the same path in the next few years.
Collective Failures When we talk about computing history, we rarely talk about failure. Narratives of technological progress are so deeply ingrained into our ways of seeing, understanding, and describing high tech that to focus on failure seems to miss the point. If technology is about progress, then what is the point of focusing on failure? Up until recently, we also rarely talked about women in relation to computing.
Documentation is hard—and if it doesn’t exist, I end up paying the same contracting company over and over to learn the business logic and reasoning behind a specific system. That’s the flaw of this approach. To your point about people, COBOL and mainframe systems have been in the news lately as state unemployment applications struggle under the historic load of applicants from the pandemic. A lot of folks are demonizing the languages, but my opinion is that it’s not just about the language or the technical systems—there’s an important social component, too. What is your take on that? I actually got into a very angry discussion with a few of the protectors of COBOL. I described what happens to COBOL in its lifecycle. Every language has a lifecycle for large systems. COBOL has a very specific one.
Mainframe systems have one primary benefit, which is a horrible trap: you can set the system up, leave it alone, and it will keep running, forever. What you get is forty-year-old codebases that keep doing their job while management doesn’t bother paying to maintain them. Why would you? I can get away with paying less money and the system will keep producing, so I don’t need to bother maintaining it. But then you have code that was written forty years ago by people who have since retired.
Three hundred thousand windows automatically lit up; the smart sensors understood the moods of the men and women coming home and automatically adjusted the temperature, the color of the lighting, the channels showing on the TVs or the music playing through the sound systems; five thousand restaurants received automatically generated take-out orders; the health monitoring systems synced up with the body films, and, based on dozens of parameters such as body temperature, heart rate, caloric intake/consumption, galvanic skin response, suggested plans for the next day’s activities. Exhausted face after exhausted face.
The offices in the skyscrapers were lit bright as day. The giant eye zoomed in and observed a hundred thousand faces staring at computer monitors through closed-circuit cameras; their tension, anxiety, anticipation, confusion, satisfaction, suspicion, jealousy, anger refreshed rapidly while their glasses reflected the data jumping across their screens. Their looks were empty but deep, without thought of the relationship between their lives and values, yearning for change but also afraid of it. They gazed at their screens the way they gazed at each other, and they hated their screens the way they hated each other. They all possessed the same bored, apathetic face.
The Brain Doctor The second morning of my experiment, I took the subway uptown to see Dr. Kamran Fallahpour, an Iranian-American psychologist in his mid-fifties and the founder of the Brain Resource Center. The Center provides patients with maps and other measures of their cognitive activity so that they can, ideally, learn to alter it.
Some of Fallahpour’s patients suffer from severe brain trauma, autism, PTSD, or cognitive decline. But many have, for lack of a better word, normal-seeming brains. Athletes, opera singers, attorneys, actors, students—some of them as young as five years old—come to Fallahpour to improve their concentration, reduce stress, and “achieve peak performance” in their respective fields.
It was in this spirit of professionalization that industry leaders encouraged programmers to embrace the mantle of “software engineer,” a development that many historians trace to the NATO Conference on Software Engineering of 1968. Computer work was sprawling, difficult to organize, and notoriously hard to manage, the organizers pointed out. Why not, then, borrow a set of methods (and a title) from the established fields of engineering? That way, programming could become firmly a science, with all the order, influence, and established methodologies that come with it. It would also, the organizers hoped, become easier for industry to manage: software engineers might better conform to corporate culture, following the model of engineers from other disciplines. “In the interest of efficient software manufacturing,” writes historian Nathan Ensmenger, “the black art of programming had to make way for the science of software engineering.” Chasing Waterfalls And it worked—sort of. The “software engineering” appellation caught on, rising in prominence alongside the institutional prestige of the people who wrote software. University departments adopted the term, encouraging students to practice sound engineering methodologies, like using mathematical proofs, as they learned to program. The techniques, claimed the computer scientist Tony Hoare, would “transform the arcane and error-prone craft of computer programming to meet the highest standards of the engineering profession.”  Managers approached with gusto the task of organizing the newly intelligible software labor force, leading to a number of different organization methods. One approach, the Chief Programmer Team (CPT) framework instituted at IBM, put a single “chief programmer” at the head of a hierarchy, presiding over a cadre of specialists whose interactions he directed. Another popular approach placed programmers beneath many layers of administrators, who made decisions and assigned work to the programmers under them.
With these new techniques came a set of ideas for managing development labor, a management philosophy that has come to be called (mostly pejoratively) the “waterfall method.” Waterfall made sense in theory: someone set a goal for a software product and broke its production up into a series of steps, each of which had to be completed and tested before moving on to the next task. In other words, developers followed a script laid out for them by management.  The term “waterfall,” ironically, made its first appearance in an article indicting the method as unrealistic, but the name and the philosophy caught on nevertheless. Waterfall irresistibly matched the hierarchical corporate structure that administered it. And it appealed to managers because, as Nathan Ensmenger writes, “The essence of the software-engineering movement was control: control over complexity, control over budgets and scheduling, and, perhaps most significantly, control over a recalcitrant workforce.” The corporate-friendly software engineer was precisely the kind of professional that waterfall development was designed to accommodate.
Fifty cents, one fumbling call on the observatory’s payphone, and twenty minutes spent swinging my legs on the sunshiney bench outside the gift shop and a black SUV is rolling up containing Carmen and Diane. I have kept my phone totally off and been sure to wear clothes that have not been laundered with scented laundry detergent, per Diane’s instructions.
There are two kinds of electrosensitive people, I decide—the ones who seem normal, but for God’s grace go I, and the ones who don’t. Carmen is the former, Diane is the latter. Part of the electrosensitive PR problem is that the strange ones get so much of the air time. “Well hello there,” Diane says from the passenger seat, “I thought you would have called sooner, I’ve been waiting the whole morning.” Long grey hair in pigtails, Tevas worn with socks, loose slacks.
Do you think this techno-utopian tradition runs as deep in the tech industry today as it did in the past? It varies depending on the company. Apple is, in some ways, very cynical. It markets utopian ideas all the time. It markets its products as tools of utopian transformation in a countercultural vein. It has co-opted a series of the emblems of the counterculture, starting as soon as the company was founded.
At other companies, I think it’s very sincere. I’ve spent a lot of time at Facebook lately, and I think they sincerely want to build what Mark Zuckerberg calls a more connected world. Whether their practice matches their beliefs, I don’t know. About ten years back, I spent a lot of time inside Google. What I saw there was an interesting loop. It started with, “Don’t be evil.” So then the question became, “Okay, what’s good?” Well, information is good. Information empowers people. So providing information is good. Okay, great. Who provides information? Oh, right: Google provides information. So you end up in this loop where what’s good for people is what’s good for Google, and vice versa. And that is a challenging space to live in.
8/8/08 — The 2008 Olympics in Beijing open at 8:08 p.m. on August 8, 2008, in a country where the number eight is considered lucky. When protests against Chinese practices in Tibet and elsewhere greet the Olympic Torch Relay around the world, nationalism surges online around slogans like “Red Heart China,” signaling both love of country and communist red.
2009 — Users are blocked from using profanity or discussing politically sensitive topics online, so they use clever homophones to circumvent censorship. One popular homophone is “grass-mud-horse” (草泥马 cǎonímǎ), which sounds like “fuck your mother” (肏你妈 càonǐmā). The coinage is popularized in a viral video that uses alpacas to illustrate the invented grass-mud-horse, and pictures of alpacas become a symbolic f-you to government censors.
A day before the election, an anonymous hacktivist who goes by the handle “suppermario12” hacked into the samara.kg server. Suppermario12 moved the domain to their own server, so that anyone who tried to access the site for the next ten hours saw a video that explained the mechanics of how Jeenbekov was using samara.kg to influence the outcome of the election. They also leaked information gathered from the server to journalists at several local news outlets, including the site Kloop.kg, which ran an extensive investigative report.
How did the samara.kg system work? Previously, so-called “campaigners” had to go door-to-door to buy votes. Often, jaded Kyrgyz citizens would take the cash regardless of whether they were actually registered to vote. Samara.kg helped campaigners eliminate this problem by making it easier to check someone’s eligibility to vote. Since 2015 only citizens who have submitted their biometric data can vote in Kyrgyzstan. Samara.kg had a copy of all this data, which campaigners used to ensure that the votes they were buying were genuine. Samara.kg also tracked which voters had already voted, preventing citizens from selling their vote multiple times.
The metaphors we use to describe the internet matter. In the 1990s, as the internet gained steam in the United States, one metaphor reigned supreme: “the information superhighway.” Bill Gates, for one, wasn’t a fan. “The highway metaphor isn’t quite right,” he wrote in 1995. It put too much emphasis on infrastructure—the material stuff and institutions that make the internet work. It evoked governments and implied that they should have a hand in maintaining it.
“A metaphor I prefer,” Gates continued, “is the market.” The internet wasn’t a highway at all, he argued, but more like the New York Stock Exchange—a supposedly self-organizing system generated by individuals pursuing their own interests. It would facilitate “friction-free capitalism” and realize the dream of Adam Smith’s “ideal market.” Since 1995, this laissez-faire vision of the internet has helped produce our digital world. Generations of digital capitalists embraced the internet and found innumerable ways to monetize its growth: selling access, leveraging its reach to move goods, marketing gadgets to tap into it more easily. Recently, however, digital capitalism has coalesced around one business model in particular: data extraction. Social media, search engines, and email accounts come for free in exchange for personal data that the platforms monetize.
So what is the point of this (admittedly fascinating) psychomedical history? The point is that there’s no such thing as a tool of measurement that merely “measures.” Any measurement system, once it becomes integrated into infrastructures of power, gatekeeping, and control, fundamentally changes the thing being measured. The system becomes both an opportunity (for those who succeed under it) and a source of harm (for those who fail). And these outcomes become naturalized: we begin to treat how the tool sees reality as reality itself.  When we look at AGR, we can observe this dynamic at work. AGR is a severely flawed instrument. But when we place it within the context of its current and proposed uses — when we place it within infrastructures — we begin to see how it not only measures gender but reshapes it. When a technology assumes that men have short hair, we call it a bug. But when that technology becomes normalized, pretty soon we start to call long-haired men a bug. And after a while, whether strategically or genuinely, those men begin to believe it. Given that AGR developers are so normative that their research proposals include displaying “ads of cars when a male is detected, or dresses in the case of females,” it’s safe to say the technology won’t be reshaping gender into something more flexible.
AGR might not be as flashy or obviously power-laden as the Benjamin Scale, but it has the potential to become more ubiquitous: responsive advertising and public bathrooms are in many more places than a psychiatrist’s office. While the individual impact might be smaller, the cumulative impact of thousands of components of physical and technical reality misclassifying you, reclassifying you, punishing you when you fail to conform to rigid gender norms and rewarding you when you do, could be immense.
Every two to four years, venture capital firms embark on a fundraising roadshow. General partners pitch chief investment officers on their ability to secure exclusive access to a high-quality “deal flow” while being responsible stewards of capital for stakeholders like retirees or charitable foundations.
The dynamic isn’t all that different from that of a hopeful entrepreneur seeking seed investment from a few dozen venture firms. Venture firms raise money from institutions, so that founders can raise money from them. The pressure to scale moves up the chain: a startup has to make it big in order to deliver large returns for its venture investor, so that its venture investor can deliver large returns to its limited partners.
150 Equations, 200 Parameters Jay Wright Forrester was one of the most important figures in the history of computing, but he is also one of the least understood. He trained at Gordon Brown’s Servomechanisms Laboratory at MIT, spending World War II designing automatic stabilizers for the U.S. Navy’s radars. After the war, he led the development of the Whirlwind computer, arguably the most important computer project of the early postwar period. This machine, after humble beginnings as a flight simulator, morphed into a general-purpose computer which stood at the heart of the Semi-Automatic Ground Environment (SAGE), a multibillion-dollar network of computers and radars that promised to computerize the U.S. Air Force’s response to a Soviet nuclear attack by streamlining the detection of incoming bombers and automatically deploying fighters to intercept them.
In 1956, with the SAGE system not yet finished, Forrester abruptly changed careers, shifting his gaze from electronic systems to human ones. From the unlikely setting of the MIT Sloan School of Management, he founded a discipline called “industrial dynamics” (later rechristened “system dynamics”). At first, this field focused on creating computer simulations of production and distribution problems in industrial firms. But Forrester and his cadre of graduate students later expanded it into a general methodology for understanding social, economic, and environmental systems. The most famous example of this group’s work was the “doomsday” World 3 model that stood at the center of the landmark environmental text The Limits to Growth, a book that warned of a potential collapse of industrial civilization by 2050.
In 2000, the French government told Yahoo that it couldn’t allow people to sell Nazi memorabilia on its auction site. Selling Nazi memorabilia is illegal in France. Yahoo refused. They argued that they couldn’t possibly determine where their users were geographically located. And the French government said, “Guess what, you’re going to do that—or you’re not going to operate in France.” So Yahoo figured out that they could geolocate users pretty well by IP address. Which is why we now can’t watch Netflix in some countries.
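Today that kind of geolocation is a few lines of code. A sketch using MaxMind's geoip2 library (it assumes you've downloaded one of MaxMind's GeoLite2 database files; the IP shown is a documentation-reserved placeholder, so a real lookup needs a routable address):

```python
# pip install geoip2; requires a GeoLite2-Country.mmdb file from MaxMind.
import geoip2.database

with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    # 203.0.113.5 is a reserved example address; substitute a real one.
    response = reader.country("203.0.113.5")
    print(response.country.iso_code)  # e.g. "FR" -> apply France-specific rules
```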
Government has the power to push firms. When you tell them they’re going to lose access to an entire marketplace, they’re going to make it happen. State influence cuts both ways, of course. In other kinds of markets, companies cut different kinds of deals in order to uphold oppressive regimes. In Turkey, Facebook routinely takes down any material that relates to the Kurdistan Workers’ Party (PKK), because that’s a condition of them doing business in Turkey.
Better Products, Better People Counterfeit goods on e-commerce platforms and vulgar content on media apps may seem like distinct controversies. But from the government’s perspective, they are closely connected. Ever since market reforms began in 1978, and the focus shifted toward globalization and economic liberalization, discussions in official media about improving product quality—or pinzhi (品质)—began to happen alongside discussions about improving suzhi (素质), an ambiguous term that encompasses a citizen’s intellectual and moral qualities. Rural Chinese are often seen as particularly in need of improved suzhi, and the government has launched several initiatives toward that end.
One example is the “National Training Plan for Migrant Workers,” an ambitious policy to improve the suzhi of China’s rural-to-urban migrants through a mix of state-run job-training programs and media campaigns at the provincial and municipal levels. The goal was to encourage migrant workers to improve their manners, develop useful skills, and adopt an enterprising attitude to compete in the market economy in the absence of social benefits. Chengdu’s training program, for example, had lessons on municipal hygiene and traffic rules, as well as the “right” ways of sitting, walking, and even dressing for migrant “new urbanites.” A 2009 program in the town of Zhangpu, Jiangsu Province, included lectures on labor law and job-search etiquette, as well as skills training in the accounting, security guard, and landscaping professions.
Today, Kevin spends a lot of time traveling to doctors. He tells his four other kids, “If that was you, I’d be doing it for you.” I asked Kevin about IBM: if he thinks it’s an accident, if he thinks they’re sorry. “They can do whatever they want just so they can have a buck… families lose because now they have a loved one that’s sick. If I went out and changed the oil in my car and dumped it on the grass I would get in trouble,” he said. When a commercial for IBM comes on TV, he can’t bear to watch it. “This is happening all around America,” he said. No matter if they ever admit what they’ve done, “right is right and wrong is wrong.”  James Little worked at IBM in Endicott for fifteen years as a senior operator making chip boards. He worried about leaky machines. Once, he shut down a machine that was spilling chemicals into an overflow tray. His manager chastised him and told him never to do that again. James heard rumors of chemicals dumped in holes in the concrete cellar, pipes with leaks, and train cars that spilled their deliveries of chemicals. Workers around him were getting sick. A girl who worked beside him got a brain tumor. A man in his department had his nostrils “eaten out” from the fumes. “The bottom line,” Little said, “was they wanted to get the work out… I think people were sacrificed.” Little became an activist and workplace safety advocate. He talked to the press. His manager told him if his name appeared in the paper one more time he would be fired. He kept his job until the factory shut down.
Such stories aren’t limited to Endicott. There are similar stories wherever IBM manufactured chips. Michael Ruffing and Faye Carlton worked at IBM in East Fishkill, New York. They sued IBM after their son was born blind with facial deformities that prevented him from breathing normally. Candace Curtis, whose mother worked while pregnant with her in the same East Fishkill plant, was born without kneecaps. She is not physically capable of talking. Nancy LaCroix, of IBM Vermont, had a baby girl with bone defects, which caused her brain to protrude from her skull and left her with stunted fingers and no substantial toes. One unnamed child of an employee was born without a vagina.
The subsidence had a particularly marked effect on the Grand Canal, a marvel of early twentieth-century hydraulic engineering. The canal, completed in 1900, ostensibly fulfilled the centuries-old project of draining the city’s lakes, which were seen as the cause of flooding—and impediments to urban expansion. Mexico City was trapped at the bottom of a closed valley with no natural rivers flowing in or out. Stretching over thirty miles, the Grand Canal was designed to collect rain and sewage from the city center and take it first east, towards Lake Texcoco, and then through the mountains of the north, where it would be used to irrigate the agricultural fields of the Valle de Mezquital. There was just one problem: the city’s subsidence meant that the canal rapidly lost its slope in the decades after its completion.
By the 1950s, the Grand Canal’s ability to drain the city was already vastly diminished. Engineers began to fear that by the 1970s, the first section of the canal (built on soft, clay soil) would slope towards the city center rather than away from it—rendering it useless. Without the Grand Canal during a major rainstorm, water would accumulate downtown and turn it into a virtual lake.
These bad AI romances don’t offer a coherent social critique. Instead, they emphasize the disappointment that men feel when they get rejected by robots. The point isn’t simply that computers can be smart. It’s that people can really fall in love with them, and be just as badly hurt by their indifference as they would if they were human.
The Success Daughter as Fembot Cultural historians have long recognized that stories like Metropolis reflected early twentieth-century anxieties about industrialization, mass society, and mass death. So why did the bad AI romance genre emerge when it did? In a word: the Mancession. The genre appeared at a time of rising anxiety about male uselessness and female ascendancy. According to this narrative, men are being rendered superfluous by an economy that no longer needs them while women, empowered by the boardroom feminism of Sheryl Sandberg, are scaling the corporate ladder and displacing their male counterparts.
Ten years ago, you published a book called The Future of the Internet—And How To Stop It. That was 2008. There was a lot of optimism in the air about the internet being a democratizing, empowering force. In your book, you argued that the great value of the internet lay in its “openness,” which made it a uniquely generative technology. But you warned that this openness was being threatened by “appliancization”—the attempt by certain companies to enclose the internet and turn it into a more locked-down, proprietary, and closed-source place.
The mood around technology and the tech industry has changed dramatically in recent years. The optimism has turned to skepticism, even cynicism. What do you make of this shift? And how has it affected your thinking on these issues? I think that the future that I was worried about and wanted to stop has come about.
Your basement is Health Canada-approved?  Yeah, they did a site visit this week.  There are six 3D printers down there and a few tables where we quality-check and package each stethoscope. A full print job produces enough parts to make four stethoscopes and that takes fifteen hours. In other words, every fifteen hours, we get four stethoscopes.  The 3D printed parts are the chestpiece, which is circular and holds something called a diaphragm, which is what we put to the patient’s chest; a “y-piece,” which is connected to the chestpiece by some tubing and lets us create a fork; two ear tubes, one for each prong of the y-piece so that sound can flow to both of the wearer’s ears; and finally a piece called a spring, which supports the ear tubes and keeps them a constant distance apart. We put earbuds on the ends of the ear tubes, but we don’t print those.
For the tubing, we use huge rolls of Coca-Cola fountain machine tubing, actually, and cut them up into the size we need. We’re able to leverage the fact that the FDA approves this kind of tubing for both food and medical devices — the “F” and the “D.” We buy food-safe material and then can get approval to use it for medical stuff.  Finally, our diaphragms — inserts for the chestpiece I described earlier — come from file folders, which are super cheap.  You cut circles out of file folders to make the diaphragms?  Yeah, we use a craft punch. We wanted to make it very simple. What’s the good of a process if you need a $2 million lab to implement it? My entire lab can be put together for about $5,000. And half of my machines I made myself. It works not just on the best equipment you can buy, but on anything you can scrounge together.  Once the pieces have been quality-checked, we put them in plastic packets and seal them. They arrive disassembled. The reason we do that is because we want to change people’s relationship with their equipment.  Speaking of quality-checking and making sure everything works, I know that you published a paper in an open-access medical journal where you describe how you validated the 3D-printed stethoscope against the traditional stethoscope.
Accurate cargo and commodity data could also provide an invaluable window into the flow of commerce. Imagine if you knew at all times which commodities were headed where. You could determine whether a market is about to be flooded or whether a shortage of, say, wheat means the cost of bread is about to skyrocket. “Shipping holds the no-shit, honest truth of what the economy is doing,” angel investor Doug Doan told Institutional Investor in 2016. Traders with special insight into shipping data could identify mispricing in the stock market—perhaps GE’s stock is too high, given the flood of cheap washing machines headed our way. They could then exploit that information gap to pocket the difference in price. Currently, data analysts can make estimates about ocean freight based on vessel type and economic indicators, but these are just educated guesses. If a database was constantly updated as transactions took place, this real-time data would make it possible for those with access to place more aggressive, potentially more lucrative bets.
Given the potential value of this information, the question of who will control or have access to the network of comprehensive shipping data is of urgent importance to many players within the worlds of logistics and finance. Currently, the major shipping lines, like Maersk and Hapag-Lloyd, have privileged access to a great deal of detailed information, because data about their customers and cargo is locked into their technology platforms. National agencies, such as US Customs and Border Protection, hold another large swath of data, but it’s not linked with other datasets. A growing number of startups is attempting to harvest and synthesize various sources of data, but to obtain proprietary data would require the participation of ports and shipping companies.
Closing the Digital Divide The success of Chattanooga’s municipal network is often measured in economic terms. But it has also brought substantial benefits in people’s quality of life. From helping us file taxes and sign up for healthcare benefits, to enabling us to communicate and engage with mass media, the internet is an increasingly central force in our social, economic, and civic life.
It is hardly necessary to state the value of a stable internet connection in 2017. Differing levels of access do not merely reflect pre-existing inequalities in material wealth—they reinforce them. If you can’t use the internet to access a government service, fill out a job application, or email your grandkids—or if you need to take hours out of your day to go to a library to perform these tasks on a public computer—it’s going to set you even further behind people with easy access. Marston tells me that, as a publicly owned company, the EPB has an imperative to address economic inequality as it manifests through this divide. “As a municipal utility,” he says, “our mission is to enhance the quality of life and local economy for our entire community.” To this end, the EPB offers a subsidized unlimited data plan called Netbridge to families on the National School Lunch Program. Since the “smart grid” connects to every home in Chattanooga, the plan is available throughout the utility’s entire service area, which Marston says has “dramatically raised the bar on people’s expectations of what the internet should be.” This point is significant. In other parts of the country, the “digital divide”—a structural gap between those who have ready access to the internet and those who do not—is fueled in part by a practice known as “digital redlining,” where internet service providers (ISPs) refuse to invest in low-income areas because of their poor capital returns.
But it’s the wrong starting point. They refuse to acknowledge that this is a political problem—or at least that it demands a political solution. Nothing will happen if we don’t demand building codes for these sorts of systems. And those codes have to be enforced from outside, in the form of regulation. Solving Facebook through Facebook is futile.
E. Glen Weyl (GW): We spend a huge amount of time talking to each other over Skype. But one of the first things that I learned from you was how foolish it is to be satisfied with Skype, to assume that it offers any kind of substitute for being there in person with someone. I wonder if you could start us off by talking about technology’s failures, and why it’s important for us to be aware of them.
But although it might maximize profits, this way of thinking about online harassment is almost entirely unable to address the harm that harassment causes. It assumes that the problem is individual pieces of harmful content that must be moderated—not people and their relationships. As a result, content moderation fails to serve the needs of those who are harmed online or to change the conditions that make such harm possible.
Once the problem of online harm is framed as content moderation, it is already a lost cause for victims. Inevitably, platforms claim that the sheer amount of content makes it impossible to monitor. This is true, but it conveniently leaves out the fact that every single decision made by platforms prioritizes scale, and platforms generally avoid taking actions that might reduce user engagement. At the same time, as the scholar Sarah T. Roberts has detailed, they strive to minimize costs, especially for things like human moderators.
Failing in Silicon Valley is often a prerogative of the young—or, in Kalanick’s case, the adolescent-acting. And people don’t talk about how much less sustainable it has become to be young in the Valley. One VC who back in the early aughts grew a tiny startup into an $80 million company with more than 250 employees reminisced to me about the early days when “we just lived with our parents in Toronto.” “Our labor force was ourselves and we paid for the servers by credit card,” he continued. Then he reflected a moment. “That’s no longer possible, which I guess is what makes us necessary.” Another change he has noticed: a lot of big funds have moved towards investing earlier in the life of a company. Where once a founder may have come to them looking for a Series A round, now they are coming for angel funding. This means that any falling out of favor with investors will be extremely public—“If Sequoia offered you funding and suddenly isn’t around for the next round, I ask myself: what do they know that I don’t?” Silicon Valley’s tolerance for failure is partly predicated on a privacy that is starting to dissipate.
But the thing about failing is that it seems to carry opposite meanings depending on who does it. If a traditional brick-and-mortar business hemorrhages money as unregulated digital competition moves in, then that’s just a sign that brick-and-mortar deserves to die. By contrast, if a disruptive New Economy startup loses money by the billions, it’s a sign of how revolutionary and bold they are.
We shouldn’t have lost the notion that a commercially driven media ecosystem is unlikely to foster the kind of rich analysis and deliberation that we need as an advanced technological society and as a democratic republic. The world is so complex that we actually need better forms of analysis and better forums for deliberation than the ones we inherited from the 20th century. And instead of building those, we trusted Facebook and Google. Google said, “Hey we’re going to build the library in the future! Let’s defund the libraries of the present!” Facebook said, “We will build a public square that will liberate the world and spread democracy!” And everyone went, “Great!” The very fact that these corporate leaders believe so deeply in their ability to improve our lives should have set off alarm bells. It’s not that they’re lying. It’s that they actually believe it.
Meanwhile, big tech firms have become so big that they are exempted from the logic of the market to some degree. One of the perverse things about both Facebook and Google is that because their money came so early and so easily, they think of themselves as market actors that are liberated from the market. Venture capital has a distorting power. It encourages inefficiency in the distribution of resources, it encourages bad actors, and it encourages foolish ideas. So much money chasing so many bad ideas gets abused and wasted by so many bad people.
But Kock was right: supply chains are murky—just in very specific ways. We’ve chosen scale, and the conceptual apparatus to manage it, at the expense of finer-grained knowledge that could make a more just and equitable arrangement possible. When a company like Santa Monica Seafood pleads ignorance of the labor and environmental abuses that plague its supply chains, I find myself inclined to believe it. It’s entirely possible to have an astoundingly effective supply chain while also knowing very little about it. Not only is it possible: it may be the enabling condition of capitalism at a global scale.
It’s not as though these decentralized networks are inalterable facts of life. They look the way they do because we built them that way. It reminded me of something the anthropologist Anna Tsing has observed about Walmart. Tsing points out that Walmart demands perfect control over certain aspects of its supply chain, like price and delivery times, while at the same time refusing knowledge about other aspects, like labor practices and networks of subcontractors. Tsing wasn’t writing about data, but her point seems to apply just as well to the architecture of SAP’s supply-chain module: shaped as it is by business priorities, the software simply cannot absorb information about labor practices too far down the chain.
So we all play to our strengths. I consider Stop LAPD Spying a mentor in this work, especially in resisting police and surveillance systems. For this project, Stop LAPD Spying and Los Angeles Community Action Network (LA CAN) put a lot of emphasis on unhoused populations and how Skid Row is a heavily targeted community that the LAPD uses to innovate new surveillance systems.  In Charlotte, Tamika does work around citizens returning from incarceration. Folks returning from incarceration constantly have to report back to the system. It's like the system is constantly waiting for you to fail. Tamika also brings an important perspective as a gender-nonconforming organizer: they understand how resistance to being defined under the dominant gender binary makes people targets for intensified surveillance.  How have you been able to make use of that experience and research? We’ve collectively produced materials like our Digital Defense Playbook, which is an educational resource and activity guide about data, surveillance, and safety that we created last year based on our three years of community research. We’ve taken that work to different neighborhood institutions, as well as academia. We’ve worked with data scientists, who are often disconnected from the real people represented by the statistics they analyze.
We're all learning from each other and staying in constant communication about what's happening in our different cities, and hearing the same themes. Things like, “The one mistake that I've made in my life is now the thing that's limiting how I'm able to survive.” Or, “I'm feeling heavily surveilled because I'm trans,” or “I'm feeling super targeted because my credit isn't good,” or “I've just reentered society from incarceration and I feel like every move that I make is being scrutinized and monitored.” Across all the cities, community members wanted to be seen for something more than a mistake they made, or something more than their data. And they didn't want to be pursued and tracked and monitored.
The industry’s business model involves using troves of personal and population-level data to bet on how long individuals will live, and charge them accordingly. Underwriters collect data from applicants on things like personal and family medical history, occupation, lifestyle and hobbies, and then gather data on the backend from driving records, criminal records, prescription history, credit reports, and more. Underwriters feed a portion of this data into an algorithm, which helps them determine a risk score that measures how likely an applicant is to die during the term of the policy.  You pay for the risk you represent, no matter its source. This principle is known in insurance as “actuarial fairness.” Those who are young and healthy can get more financial protection for less money, while the older or more at risk of death you are deemed to be, the more expensive and less accessible that protection becomes. Maybe you’re a wealthy suburbanite who likes racing ATVs on the weekend, or a formerly incarcerated person who now works on a fracking rig, or a single mother with asthma living near a superfund site—all of this helps determine your risk score and how much insurance you can access. Life insurance, in short, is a system built on discrimination.  A reliable test for how one’s cells are aging could be a transformative new tool for the industry. It’s illegal to explicitly discriminate against applicants on the basis of race, gender, or other categories protected by the law—life insurance companies used to deny coverage to Black people and women, but allowed masters to insure their slaves—but epigenetic underwriting could make such discrimination possible through a new form of algorithmic redlining. That’s because epigenetic tests can measure chronic environmental and psychological stresses that often map on to race, class, and gender. For example, one study used one of Horvath’s epigenetic clocks to examine blood samples from 392 Black adults and found that high lifetime stress correlated with accelerated epigenetic aging. Similarly, populations living in highly polluted areas, oftentimes the poor, will have distinctive methylation patterns that could allow the life insurance industry to further discriminate based on class.
There’s a second way that epigenetic tests are likely to be used in the life insurance industry, one that carries a related risk. Epigenetics has been heralded within both academia and the media as the answer to the longstanding nature-versus-nurture debate: proof that our surroundings and behaviors influence us on a molecular level. Using this logic, the life insurance industry—along with companies offering consumer epigenetic tests—is likely to use epigenetic clocks to offer personalized health insights and behavior-modification programs to its customers. High epigenetic age? Try eating more kale! This could also lead to discount programs that reward policyholders who successfully lower their epigenetic age. Similar discount schemes are commonplace in auto insurance, where companies like TrueMotion allow policyholders to download an app that tracks their actions behind the wheel and offers discounts or price hikes based on safe or risky driving.
For instance?

The two things I think are most interesting are coins for payments that preserve privacy, and Handshake, a coin that creates a separate DNS namespace for websites. On the privacy coins, I think we’re on the verge of an explosive battle between Facebook, who have said they’re doing a coin that will probably be launched in WhatsApp; Telegram, who raised over one billion dollars to create a coin that can be used within their messaging app; and Signal, who are working to integrate a privacy-preserving coin called Mobilecoin. The messaging apps are a natural place for payments to occur, something I became religiously convinced of after living in China for a while and using WeChat for absolutely everything. But WeChat, PayPal, Venmo, etc., are basically panoptic surveillance systems for governments, and if that’s the vision of digital payments we end up with, we’re in a pretty horrifying 1984-esque society that seems hard to get out of. Of the three messenger coins, I’m probably most excited about Mobilecoin/Signal because of Signal’s impeccable reputation for serious privacy and security—a coin by Facebook has serious trust issues to overcome.

Handshake, the other project I’m really excited about, uses a blockchain to auction off and then keep track of a bunch of new domain names. So for example you could bid on “.mango” and then have xiaowei.mango as your blog. Besides the obvious fun of having a bunch of new names to play around with, and not having to wait on a slow-moving bureaucracy to approve new TLDs, Handshake domains have the added advantage of being basically unseizable, so you can use them to host whatever content you want without worrying that the government or a registrar will decide to stop you.

So when you started seeing more interesting non-Bitcoin projects, what did you do next?

I formed a startup with my friend Ben Yu to do a live-streaming platform using blockchain for micropayments. The inspiration for this was that when I was in college in China, I saw these live-streaming platforms where people were making huge amounts of money, but the platform took a 50 percent cut.
I asked some friends in Beijing if they could introduce me to any Chinese investors that were interested in investing in blockchain ideas. At the time there were not very many. One friend mentioned that he knew the most famous Bitcoin blogger in China, Li Xiaolai, and offered to introduce me to him; if he liked my project he might tell his investor friends about it. I tried to set up a meeting with this guy. He was not around but introduced me to his partner, Lao Mao. I ended up meeting with Lao Mao in this beautiful coworking space, and he showed up in a new Porsche. In my mind I was like, “Huh, being a blogger is really lucrative in China.” We ended up getting along. He was really smart, and we had the same outlook on the future of the space. At the end of it, he said that he was interested in investing in what we were doing. I was surprised because I didn’t know they invested. When I asked about it, he gave me this weird look and said, “Yeah, we run this fund called INBlockchain. It’s a fairly large fund in China.” When I got home and read up on them, I found out they had the biggest Bitcoin holdings of anyone in China, maybe the world. I’m glad I went into the meeting feeling low-stress, thinking I was just going to talk to a blogger—it probably made the pitch a lot better.
This is memorably illustrated in the 1983 cult classic WarGames. A teenager played by Matthew Broderick hacks into a NORAD supercomputer and sets off a simulation that almost triggers World War III. The problem is that the computer can’t tell the difference between playing a thermonuclear war and actually fighting one, between a “bite” and a bite.
But nuclear annihilation wasn’t the only game that computers could play. In 1962, a programmer at MIT came up with Spacewar!, a multiplayer space combat game that became a huge hit in computing centers around the country. Spacewar! turned the unwieldy, intimidating machines of mid-century computing into instruments of play and pleasure. It made computers fun.
The gendered attributes switch as you travel to the back of the stack. At the far end, developers (more often “engineers”) are imagined to be relentlessly logical, asocial sci-fi enthusiasts; bearded geniuses in the Woz tradition. Occupations like devops and network administration are “tied to this old-school idea of your crusty neckbeard dude, sitting in his basement, who hasn’t showered in a week,” says Jillian Foley, a former full-stack developer who’s now earning her doctorate in history. “Which is totally unfair! But that’s where my brain goes.”

The Matriarchy We Lost

The brilliant but unkempt genius is a familiar figure in the history of computing—familiar, but not immutable. Computing was originally the province of women, a fact innumerable articles and books have pointed out but which still seems to surprise everyone every time it’s “revealed.” The bearded savant of computer science lore was the result of the field’s professionalization and increasing prestige, according to the computing historian Nathan Ensmenger.
“If you’re worried about your professional status, one way to police gender boundaries is through educational credentials,” says Ensmenger. “The other way, though, is genius. And that’s something I think nerd culture does really well. It’s a way of defining your value and uniqueness in a field in which the relationship between credentials and ability is kind of fuzzy.” And “genius,” of course, is a strongly male-gendered attribute—just look at teaching evaluations.
To find out how robust this relationship was, Horvath began gathering publicly available datasets that included information about subjects’ age and DNA methylation patterns. By 2012, he had gathered eighty-two datasets with eight thousand samples in them. The data came from a wide range of people of different ages, and from different parts of the body: cord blood from newborns in various parts of the world; brain, stomach, lung, liver, breast, and uterine tissue; sperm; immortalized B cells from people with a rare genetic disease. He even gathered data on chimpanzees.

Horvath used about half of these datasets to train an algorithm to look for associations between DNA methylation and age. Then he tested the algorithm on the other half of the datasets. What he found was so remarkable that several academic journals rejected his results out of hand. The pattern of methylation at just 353 of the three billion or so pairs of DNA nucleotides in the human genome corresponded to a person’s age with 96 percent accuracy, an unprecedented degree of correlation between a biomarker and the process of aging. After Horvath finally convinced a journal to publish his results, other researchers began to replicate them.

Horvath referred to his successful algorithm as an “epigenetic clock”: insert some body tissue, and it would spit out your epigenetic age. He hypothesized that people with a higher epigenetic age than chronological age (the one we mark with birthdays and greeting cards) might be at a higher risk of early death. With a group of fellow researchers, he set out to develop a new algorithm that would analyze methylation patterns at DNA sites associated with mortality and compare them to the age of death of subjects in large longitudinal studies. It turned out that the 5 percent of people with the highest epigenetic age relative to their chronological age were twice as likely to die prematurely as an average person of their chronological age. Horvath and his colleagues called this new epigenetic clock GrimAge, after the grim reaper, because it seemed to have the disturbing potential to tell if you were headed for a premature death.
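For readers who want to see the shape of the method, here is a minimal sketch of the train-on-half, test-on-half procedure described above. It assumes scikit-learn’s ElasticNetCV as the model (Horvath used a penalized elastic net regression of this general kind), and the arrays are synthetic placeholders rather than real methylation data.

```python
# Sketch of an "epigenetic clock": a penalized regression from methylation
# levels to age. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sites = 1000, 500
X = rng.random((n_samples, n_sites))          # methylation level at each CpG site
informative = rng.choice(n_sites, 20, replace=False)
y = 40 * X[:, informative].mean(axis=1) + 30  # synthetic ages driven by 20 sites

# Train on roughly half the data, validate on the held-out half
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)

# The L1 part of the penalty zeroes out most site weights, leaving a small
# age-informative subset (353 sites, in Horvath's published clock)
model = ElasticNetCV(l1_ratio=0.5, cv=3).fit(X_train, y_train)

epigenetic_age = model.predict(X_test)  # insert tissue, get an age out
print("sites with nonzero weight:", np.count_nonzero(model.coef_))
print("correlation with age:", np.corrcoef(epigenetic_age, y_test)[0, 1])
```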
This was a scientific breakthrough with an uncommonly wide range of applications, some of them quite unsettling. Almost as soon as Horvath began publishing the research on his clocks, people began to realize their potential to shape domains from law enforcement to healthcare. Scientists started to investigate whether the algorithms could be used to help solve crimes by using genetic material left at crime scenes—blood, hair, skin cells, bodily fluids—to determine the age of unknown victims or perpetrators. Online, a half-dozen or more companies sprang up selling versions of Horvath’s clocks for around $200, promising prophetic insights to largely affluent customers concerned with prolonging their youth. These companies are like 23andMe, but instead of purporting to reveal your ancestral past, they claim to divine, and allow you to intervene in, your biological future.
An Industry Built on Paper

Shipping may be unusually dependent on paper, but it’s not for any lack of data. And there are lots of different kinds, including data about ship locations, ships themselves, what ships are carrying, and the buyers and sellers of ship cargo. Ship locations are relatively easy to come by: anyone with an internet connection can monitor the movement of large ships. The UN’s International Maritime Organization requires large vessels to transmit their location information using a VHF-radio protocol called the automatic identification system, or AIS. A Google search will yield numerous portals where you can view ship movements in near-real time.
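As a small illustration of how accessible this location data is, here is a sketch of polling one of those portals for a vessel’s AIS position reports. The URL, endpoint, response shape, and MMSI number below are all hypothetical placeholders; substitute the API of whichever portal you actually use.

```python
# Sketch: polling ship positions from an AIS aggregator portal.
# "https://example-ais-portal.test/api/positions" is a placeholder URL.
import requests

AIS_API = "https://example-ais-portal.test/api/positions"  # hypothetical

def ship_positions(mmsi: str) -> list[dict]:
    """Fetch recent position reports for one vessel by its MMSI number."""
    resp = requests.get(AIS_API, params={"mmsi": mmsi}, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: [{"lat": ..., "lon": ..., "ts": ...}, ...]
    return resp.json()

for report in ship_positions("366999712"):  # made-up MMSI
    print(report["ts"], report["lat"], report["lon"])
```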
Data about what’s happening on ships is increasingly sophisticated, but it’s usually proprietary, held mainly by the ship operators. The newest shipping vessels are covered with sensors of all kinds, monitoring everything from cargo temperatures to fire hazards to hull conditions. Atop a ship’s mast, an anemometer might measure wind speed and direction. On the bottom of the hull, an echosounder can detect the depth of the seabed. In the engine room, meters measure the flow of fuel, monitoring the engines’ efficiency and condition. Increasingly, this data is reported back to shore in near real time: 5G technology and low-Earth orbit satellites have made worldwide connectivity increasingly practical. This matters because technologists in the shipping industry envision a near future in which one captain controls a fleet of crewless ships from an onshore computer. But detailed as this shipboard information is, it’s generally not available to anyone outside the ship operators.
In the writer Kodwo Eshun’s description, the machine itself shares authorship of “Acid Trax.” “Acid is an accident,” he suggests, “in which the TB-303 bass synthesizer uses Phuture to reproduce itself, to multiply the dimensions of electronic sound, to open up a nomadology of texturhythms, rhythmelodies.” The story of acid house begins in a factory.
Transistor Rhythm

While house music’s sonic textures drew from both African-American music’s corporeal funk and European synth pop’s electronic sheen, it only crystallized as a genre and production practice after some help from Japanese technology. The Roland Corporation, founded by Ikutaro Kakehashi in Osaka in 1972, unknowingly provided the final ingredients for the futuristic dance music that would emerge in America over the next few decades. In 1980, Roland’s TR-808 Rhythm Composer, the “TR” designating “Transistor Rhythm,” became one of the first synthesized drum machines to become available to consumers.
Safiya: I don’t think tech companies are equipped to self-regulate any more than the fossil fuel industry. Certainly, our hyperinvestment in digital technologies has profound social, political, and environmental consequences. We’re only beginning to scratch the surface in understanding these consequences, and what it means to be building these huge communications infrastructures.
Sarah: Policymakers like to say, “The technology is too complicated, so I can’t understand it. And if I can’t understand it, I can’t regulate it.” The industry encourages that impression. But in fact, when states push hard enough on industry, industry finds a solution. It’s not so complicated after all.
The Sonic Guide was revolutionary — except no one bought it. Kay and Smith went on to invent many other assistive devices, including the Viewscan, the first portable video magnifier; the first portable talking word processor; and the BrailleNote reader, all of which live on in different forms today. The company that Smith founded, today known as HumanWare, became the Apple of blind technology.

The field has been moving rapidly since the release of the Sonic Guide. Today we have digital magnification glasses like eSight; powerful video magnifiers from HumanWare; artificially intelligent headsets like OrCam, similar to Seeing AI; and Kay and Smith-inspired sonar-detecting bracelets like the Sunu Band. They’re all quite expensive, however, which is why the App Store’s endless stream of affordable and often free assistive apps are so important. With text recognition apps like KNFB Reader, money-identification apps like LookTel, and GPS and location-orienting apps like BlindSquare or Microsoft’s Soundscape, costly physical devices are losing their monopoly on assistive technology.

Yet these apps aren’t always particularly sophisticated. Like Be My Eyes, they often involve a video link to a remote worker. They frequently rely on human rather than artificial intelligence. As such, they belong to a broader trend of on-demand labor apps like Instacart, DoorDash, and Uber. The support workers and personal assistants that vision-impaired people have always depended on are still there. But now they live on the other side of our phones, as gig workers performing platform-mediated labor.
Tethered, Together

I’m with my friends in Old Spitalfields Market in East London. It’s trendy, sprawling, and filled with the scents of cross-continent street food. The market is bustling and we’re ravenous, but we can’t find the place we’re looking for. Finally I say, “Should I just use my phone?”

“Let’s just use our eyes,” my friend says to me, before biting her tongue.
Peluso’s case study focused on Kalimantan in Indonesia, where the Food and Agriculture Organization of the UN had found that Japanese, Philippine, US, and European timber concessions had been allocated 57.9 million hectares of forest by the government—despite the fact that only 43.3 million hectares were classified for timber production. The governmental forest-planning map that guided the awarding of concessions also plainly ignored indigenous community boundaries.
This imbalance between production, conservation, and community remains just as exaggerated today. Overlapping permits have led to a situation in which “130 percent of the total area of West Kalimantan is now covered by concessions for mining, palm oil, logging, and pulp and paper plantations,” according to a 2017 study by Indonesian scholar-activist Irendra “Radja” Radjawali and his colleagues.
In the winter of 2018, I started to hear rumors about miraculous clinical developments taking place in my hometown of Philadelphia involving growing sheep fetuses in plastic bags. “Bio-bags” was what they were called. Eager to learn more, I got in touch with the renowned pediatrician Dr. Thomas Shaffer. Shaffer wasn’t involved with the bio-bag experiments, but he knows a lot about fetuses. He is a pioneer of “liquid breathing” techniques designed to help prematurely born babies survive.

“As we’ve known for some time, fetuses are not little adults,” he told me over the phone, in early 2019. “Have you read Water Babies by Charles Kingsley? Have you watched James Cameron’s The Abyss? Fetuses’ survival comes down to what a deep diver needs to do: fill his lungs with water.” I nodded, hoping for Shaffer to say explicitly what I (a creature of the humanities) had not yet felt able to argue: that fetal humans are a distinct, aquatic species.

Intrauterine space is wet. The amniotic sac that holds the fetus contains up to about a quart of oxygen-rich fluid that is mostly composed of urine. Exposure to the air stops fetal lungs from completing their development. Before we turn into “land babies” (that is to say… babies), we breathe amniotic liquor. Though we have no gills, we move our tiny diaphragms and intercostal muscles in a dedicated rehearsal of future gaseous breathing, and we do not drown.
Some escapologists and deep-water divers try to slow their heart rates by “remembering” this time before fear — this state of non-antagonism towards water — to calm themselves. These trance-like attempts at becoming-amphibian are not, I feel, what the conservative Christian organization Focus on the Family had in mind when it gazed, via ultrasound, into the abdomen of Abby Johnson — the Planned Parenthood volunteer turned abortion opponent whose memoir Unplanned made her into a heroine of the anti-choice movement, and is now the basis for a movie with the same title — and declared the contents “human.”

Dr. Thomas Shaffer did not explicitly say whether or not he agrees with the anti-abortion political thrust of this kind of humanization of the fetus. Yet, as he himself explained to me, breakthroughs in the field of neonatology have frequently been predicated precisely on the recognition that fetuses are simultaneously part of land-based humans, and nothing like land-based humans. They are another species. The wetness of the womb — or, rather, the wetness of the entity some academics call “the motherfetus” — is both moat and membrane, bath and barrier, bridge and buffer: it is both what makes gestator and gestatee dissoluble, and what makes them indissoluble. Wetness is, in this sense, intrinsically Janus-faced and inhuman: “we” cannot live there. Liquid is villainously difficult to control, contain, and put to work within “wet tech.” It is lethal to the gestator when it floods her from the inside in, for instance, a hemorrhage. Water is just as much a killer as a nourisher, just as much a threat to life as a source of life.

Born in a Ziploc

I started contacting neonatal scientists in Philadelphia because I wanted to talk about what water does in the context of a pregnancy, and to what extent the process might be automated through something like a “bio-bag.” Eventually, I managed to find a researcher directly involved in the bio-bag experiments: a surgeon at the Children’s Hospital of Philadelphia named Dr. Emily Partridge. I also learned something extraordinary: while the bio-bags currently involve sheep fetuses, the researchers plan to begin trials with human fetuses next year.
Storytelling is not only for founders. It is a skill that everyone at a startup should cultivate, because the vast majority of startups will fail. And when you suddenly learn that you are out of work and all that equity you pulled all those all-nighters to earn is worthless, no job interviewer will want to hear that you got screwed over. They want to hear how you grew.
Even if you do get hired at one of the biggest tech companies, the risk of failure does not go away. On the contrary, a funny thing happens: the more successful the company becomes, the higher the bar for any internal project to be considered successful. If you build a startup that gets ten thousand users, you could sell that startup for seven figures. But once you work at a platform with billions of users, if you design a feature that only a few million use, your bosses will shake their heads and shut it down.
Saturday, February 15, 2003 was an unprecedented day in world history. From sunrise to sunset, an estimated thirty million people in nearly 800 cities across the globe hit the streets in a coordinated effort to stop the US-led war on Iraq. F15, as the anti-war protest became known, remains the largest global demonstration in history.

Like all protests, F15 was orchestrated by a movement. And this movement did something unprecedented for its time: it used the internet to coordinate a coalition of hundreds of popular organizations across the world. In fact, the real mouthpiece of F15 was a grassroots network called Indymedia, composed of around 200 local “independent media centers” (IMCs) that published citizen journalism on the web. Indymedia had launched three years earlier in Seattle during the protests against the World Trade Organization in 1999. Within a few years, it grew to become a global collection of community-run newsrooms devoted to covering social and political issues from a left-wing perspective. Indymedia maintained a global website at indymedia.org, which aggregated content from the many place-based Indymedia sites that were run by the local IMCs, such as indybay.org, based in the San Francisco Bay Area. The project was volunteer-run and committed to anti-corporate journalism. And, prefiguring the explosion of social media, Indymedia emphasized user-generated storytelling, as captured by the slogan: Don’t hate the media, be the media.
On the day of the F15 protests, the headline on the global Indymedia website read: “Millions March Worldwide to Denounce Bush’s War Plan.” The article listed more than eighty protests worldwide, with links to the associated stories from the local Indymedia sites. Following the initial story, there was a regional roundup, which went into greater depth about the protests in different parts of the world, alongside a collage with over 200 photos from across the globe. “Indymedia, with reports for all of the biggest demonstrations and many of the smallest, wove hundreds of separate actions into a single story,” wrote New York City Indymedia activists Josh Breitbart and Mike Burke in a piece for Clamor magazine. “As popular uprisings from around the world begin to coordinate their actions, [I]ndymedia is proving to be an essential tool for imagining this new community.” As Breitbart and Burke suggest, F15 would not have been possible without Indymedia. It is not a stretch to say that Indymedia gave form to the global anti-war movement in 2003, just as it gave form to the anti-globalization movement a few years earlier.
Still, something was lost with the rise of personal computing. One clue comes from a 1978 article by the leftist tech activist group Boston Computer Collective. Writing in the radical science magazine Science for the People, they took aim at the hollowness of the personal computing revolution, reviewing Ted Nelson’s Computer Lib/Dream Machines, a widely circulated book celebrating that revolution.
Making computers more widespread would not “pave the way towards a just society,” they argued. Smaller machines would not mean more personal power and less corporate control. “We cannot accept Nelson’s implication that a small computer must come from a relatively small manufacturer,” they wrote, “or that this supposedly small corporation will therefore hold public interest over profits.” Nor could they accept the idea that “hypermedia” and “individualized” instruction would improve the conditions of education; rather, it would likely lead to more “individualized control and standardization.” In subsequent decades, their critiques seem to have proven correct. Decentralization and personalization—watchwords of the personal computing and internet era—did not automatically serve as forces for liberation. Rather they were something of a Trojan horse: a way of making computer technology so intimate that it brought profit-making and corporate power into every aspect of our lives.
Sean Parker is also a billionaire. Do you see rank-and-file tech workers expressing doubts about what they’re building as well?

When you’re an engineer, you’re constantly being told to do things that are clearly not good for the user. If you’re building any kind of app or platform that makes its money from advertising, you are trying to maximize “time spent”—how long a user spends with your product. Time spent is considered a proxy for value delivered to the user, because if your product wasn’t useful, the user wouldn’t be using it.
Here’s how it typically works. An order comes down from on high: the board says to increase revenue. What’s the best way the management team knows to increase revenue? To increase time spent. So they issue the order, which gets percolated down the tree, and now everyone is working on increasing time spent. This means making the product more addictive, more absorbing, more obtrusive. And it works: the user starts spending more time with the product.
In terms of revenue, the game development industry makes more than twice what Hollywood does in any given year. It’s been ahead for almost a decade now. But that’s not going to the creators—not the programmers, not the QA folks, not the artists. Game development is centered in metropolitan areas like San Francisco, London, Tokyo, and Paris. These are places where rents are going up, but the pay for game workers isn’t. So more and more, we’re seeing people struggling to make it in this industry.
How have the jobs changed within the gaming industry over time?

Similar to other parts of the tech sector, the game development industry is moving away from hiring people full-time and toward a gig model. It’s becoming commonplace to hire contractors and freelancers instead of full-time visual artists, sound designers, or writers for games, or to have fans submit speculative work to be used in games, paying them only a pittance.
Vulgar content that goes viral on platforms like Kwai threatens to undermine suzhi, then, just as counterfeit goods that circulate through platforms like PDD threaten to undermine pinzhi. In both cases, however, a top-down order isn’t enough for the countryside to upgrade its lifestyle. Counterfeit goods, and “low-quality” content, continue to flow.
“Other men can go home and watch their wives. I watch my phone,” Zhu Hongfu, a laborer from Zhejiang province, baldly admits to me. A bachelor now in his fifties, Zhu is among the roughly thirty million “surplus men” in China, many from rural or working-class backgrounds who, as a result of China’s skewed gender ratio and their own financial position, may never find a partner to marry. Besides using WeChat to keep up with friends, he says, his phone serves just two purposes: playing an online card game called Double Buckle, and browsing videos of attractive women.
In 2006, the Susquehanna River rose through the rooms, easily ripping at their seams and fixtures, a flowing push against the old giant. Named “Oyster River” by the Lenape people, the Susquehanna is one of the country’s most polluted. It came to know nuclear meltdown from Three Mile Island, and it knew coal and fertilizer and feces, before it came into the club. Then the teenagers charged against the broken surfaces.

In the locker room there are still four inches of green soap in the dispenser but the porcelain toilets are smashed or filled with shit. The mirrors have upside-down crosses and

666
PUSSY
SATAN

With language learned from horror movies, the trespassers hack away at Big Blue’s shadow. A fuck you, from the town to IBM. A mutual fuck you, from IBM to the town.
The Plume

It started beneath a swath of train tracks and poured gravel beside Building 18, where circuit boards were made. It formed from repeated spills of volatile organic compounds used in the degreasing and cleaning of microchips. In 1978, 1.75 million gallons of wastewater were released. That same year, 4,100 gallons of liquid solvents, including trichloroethylene (TCE) and 1,1,1-trichloroethane (TCA), were released. In 1980, IBM contacted the Environmental Protection Agency (EPA) to report that contaminants had begun to form a pool beneath Building 18. This is what became known as the plume.

It is hard for people to believe in invisible things, but the plume began to manifest. Its vapors started to travel underground. It spread to encompass 300 acres of the town: churches, movie theaters, grocery stores, and 480 homes. It was no longer invisible when people began to get sick.
Unregulated Algorithms

Even as the use of facial recognition software increases in law enforcement agencies across the country, the deeper analysis that experts are demanding isn’t happening. Law enforcement agencies often don’t review their software to check for baked-in racial bias—and there aren’t laws or regulations forcing them to. In some cases, like Lynch’s, law enforcement agencies are even obscuring the fact that they’re using such software. To take another example, in their Perpetual Line-Up study, Georgetown researchers found that the Pinellas County Sheriff’s Office in Florida runs 8,000 monthly facial recognition searches, but the county public defender’s office said that police have not disclosed the use of the technology in Brady disclosures—evidence that, if favorable to the defense, must be turned over by prosecutors.
Garvie said she is confident that police are using facial recognition software more than they let on, a practice she referred to as “evidence laundering.” This is problematic because it obscures just how much of a role facial recognition software plays in law enforcement. Both legal advocates and facial recognition software companies themselves say that the technology should only supply a portion of the case—not evidence that can lead to an arrest.
Jaron Lanier (JL): Accepting any particular technology as being a given, as an inevitability, as beyond criticism, as just a part of the natural environment, means that technology has failed. Such a technology has failed to foster human engagement. It has failed to be integrated into human society in a constructive way.
To criticize technology is to love it. Technology is people getting better at doing things in the world—there’s really no need for a more elaborate definition. And so to criticize the technology—whether it’s Skype, or the way our cities are laid out or the way the English language is constructed—is not to hate technology. Rather, it’s to love technology by engaging with it.
That really resonates with how the press covers someone like Elon Musk.

Exactly: Elon Musk is the classic example. And I actually really admire Elon Musk. I should say that one of my principles for working on Silicon Valley has been to take people at their word. The first news story I ever did when I was a journalist was about a guy who bilked widows out of their houses. My job was to figure out how he did it. So I spent all afternoon with him. He was a totally charming man. He didn’t lie to me. He told me exactly how he did it. I reported the story and I got two kinds of letters. One kind of letter said, “You finally busted the prick. You nailed him.” The other kind of letter was written by his friends. I was sure they were going to hate me. But they said, “You finally showed the world what a great businessman he is.” As we try to figure out Silicon Valley, I think it’s important to pull back a bit and try to see it from both sides. That can be tough if you have stakes in the debate. But it also gives you more room to see the whole world.
I also wonder whether one of the reasons that tech CEOs dominate the media narrative is that the ubiquity of nondisclosure agreements (NDAs) makes it very hard for rank-and-file tech workers to have a public voice.

One of the ironies of the Valley is that the NDAs do prevent the transmission of stories from the Valley to Washington, New York, Boston, and elsewhere. But within the Valley, everybody knows everybody, more or less, so the NDA doesn’t apply.
These numbers are striking, since once again they reveal that America remains a leader of sorts. When compared to those of the other top fifteen highest civilian-gun-owning nations, the American military arsenal includes over twice as many firearms as its second-closest competitor (France), three times as many as its third (Sweden), four times as many as the fourth (Serbia again), and seven times as many as the fifth (Iraq). When it comes to police stockpiles, America once again leads the pack: American police have 1.15 million guns, a number followed only by Iraq (690,000), France (218,000), Yemen (210,000), and Saudi Arabia (90,000).
How to make sense of this—what commonalities shape this distribution? One thing that leaps out, causing considerable offense to chauvinistic American sensibilities, is how, when it comes to saturation in both civilian guns and guns in the hands of its military and police, America far exceeds variously repressive or chaotic Middle Eastern states (Saudi Arabia, Yemen, Iraq) and nations which in recent memory have seen brutal civil wars or other violence (Serbia, Cyprus).
Joan left the hospital disoriented and disheartened. At nineteen, she felt like her life might be over before it started, forever marked by the stigma of having been involuntarily committed. Fortunately, a work-placement program run by the hospital helped her get a job, and though she couldn’t return home, her Aunt Maud and Uncle Ted took her in.
The road ahead would not be easy. Always the class clown at school, Joan knew it was better to hide her insecurities and weaknesses than to ask for help. Joan could not read easily or write well, nor could she figure out numbers and arithmetic. She was smart but extremely dyslexic, at a time when dyslexia was mostly unknown—people called her stupid instead, if they had the chance. So Joan made sure they never did. Hiding her disability as she started work, Joan pulled herself up by her bootstraps, asking help from no one.
In many places, a bootcamp is still seen as having a bit more prestige. But if people don’t know anybody in the tech sector, then they go to what’s available. And nobody can beat the advertising and name recognition of the for-profit colleges. So the differences are mostly about how much status people bring to the problem—and that affects how they are going to get trained for the tech labor market.
The tech industry has problems recruiting and retaining women and people of color. One of the things that bootcamps have done is bring a more diverse set of people through the door. Do you feel that these for-profit coding bootcamps could actually help alleviate inequality—or are they deepening it?

What I suspect is happening is the same thing that we see happen in traditional higher education at elite universities. Schools like Harvard, Yale, and Princeton have historically had diversity problems. To get better, they cherry-pick the best students from minority groups, because they have the prestige to do so.
I was reminded of those physicists in December 2020, in the wake of Google’s high-profile termination of AI ethics scholar Timnit Gebru. Her firing was the final step in management’s systematic silencing of her scholarship, which came to a head over a paper she coauthored called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” The paper offers a damning critique of the energy demands and ecological costs of the sorts of large language models that are core to Google’s business, as well as how those models reproduce white supremacy—and codify dominant languages as default while expediting the erasure of marginalized languages. Thousands of Google workers, as well as supporters throughout academia, industry, and civil society, rapidly mobilized in defense of Gebru, writing an open letter to Google executives that demanded both her reinstatement and an apology for the egregious treatment she received.

Like the Franck Report before it, however, this open letter represented a grave misunderstanding of the politics of AI and was in no way commensurate with the threat we face. The technologies being developed at companies like Google present major stakes for all of humanity, just as the invention of nuclear weapons did in the previous century. They are strengthening concentrations of power, deepening existing hierarchies, and accelerating the ecological crisis. More specifically, big tech corporations are extracting labor, ideas, and branding from Black people, and then disposing of them with impunity—whether it’s scholars like Gebru or Amazon workers in Bessemer, Alabama organizing against plantation conditions.

Racial capitalism’s roadmap for innovation is predicated on profound extraction. AI is central to this process. The next flashpoint over AI is inevitable—but our failure to respond adequately is not. Will we continue to write letters appealing to the conscience of corporations or the state? Or will we build a mass movement? As Audre Lorde said, we cannot dismantle the master’s house with the master’s tools. We cannot stop Google from being evil by uncritically relying on the G Suite tools it developed. We cannot uncritically champion the most popular among us, as if social capital will resolve the colonial entanglements reproduced in much of what passes for research in the field of technology studies. An atlas ain’t no Green Book, and we cannot afford to pretend otherwise. What we can do is to build a better analysis of the context of racial capitalism in which extractive technologies are developed. We can share knowledge about the ways in which such technologies can be refused or have their harms mitigated. We can forge solidarities among workers, tenants, and technologists to help them organize for different futures. We can light alternate beacons.
2/ Dismantling racial capitalism and displacing the carceral systems on which it relies requires an understanding of how technology produces “new modes of state surveillance and control,” Dorothy Roberts argues. Part of the challenge is that these new geographies of policing, regulation, and management are largely invisible. We experience the immediacy of our Amazon package being delivered without seeing the exploitative labor conditions decreasing the distance between order and arrival. This is not a function of insufficient effort—it’s an indication of how successful big tech corporations have been in concealing the sources of their power. In his essay in this issue, Julian Posada provides a detailed account of Venezuelans performing the tedious, low-paid labor of data labeling on which AI depends—labor that is hidden beneath Silicon Valley’s minimalist user interfaces and promises of automation. The circuits of racialized capital link us ever more closely together even as the pandemic has deepened our sense of alienation.

Understanding how tech has reorganized labor, and developing a strategy to break free, is not easy. It cannot be done with the narrow technical training that produces computer science PhDs—the recent appending of ethics courses notwithstanding. It requires an interdisciplinary analysis in partnership with impacted people who are on the forefront of digital experimentation. There is no way around doing this work.
Since the internet became mainstream in the 1990s, we’ve been told it would take us to utopia. The digital economy would transcend analog laws and limits, and grow forever on the fuel of pure thought. The digital polity would make us more engaged, and produce more transparent and responsive governments. As individuals, we could expect our digital devices and platforms to make us happier and healthier, more open and connected.
For decades, these promises seemed plausible. At least, most of the media thought so, as did most of our political class and the general public. In the past year, however, the consensus has shifted. Digital utopianism suddenly looks ridiculous. The old dot-com evangelists have begun to lose their flock. The mood has darkened. Nazis, bots, trolls, fake news, data mining—this is what we talk about when we talk about the internet now.
“[Race is] just very hard to control for in their testing,” Garvie explained. “There haven’t been enough public studies, but the limited research that has been done does suggest that the algorithms may have different accuracy rates depending on the race of the subject.” Garvie and her colleagues believe that NIST is well positioned to help with such studies. The agency, part of the US Department of Commerce, conducts voluntary tests with facial recognition companies every four years and is testing for variances in results by country of origin—which Garvie notes “can be a good proxy for race.”

NIST’s tests won’t come soon enough for Lynch, however. His case is currently playing out in the Florida First District Court of Appeal. The clock also can’t be turned back for others like him, who may have been unfairly tried as a result of less-than-perfect software without transparent standards for its use.
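The kind of analysis Garvie is calling for can be stated very simply in code: run the system against a labeled test set and disaggregate its error rates by demographic group. A minimal sketch, with invented trial data standing in for a real audit set like the ones NIST assembles:

```python
# Sketch: disaggregating a face matcher's accuracy by demographic group.
# The trial data below are invented; a real audit would use labeled test
# sets of the kind NIST maintains.
from collections import defaultdict

# (group, predicted_match, true_match) for each trial -- made-up results
trials = [
    ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", True, True), ("B", False, False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in trials:
    total[group] += 1
    correct[group] += (predicted == actual)

for group in sorted(total):
    print(f"group {group}: accuracy {correct[group] / total[group]:.0%}")
# Divergent per-group numbers are exactly the kind of disparity the
# studies described above are designed to surface.
```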
Facial recognition software raises many questions that need clear answers. Obtaining those answers will take more than commissioning studies, as vital as they are. It’s also essential that laws catch up with the technology, in order to provide people like Lynch with the opportunity to know the tools that are being used against them. Most importantly, we need to take a closer look at who’s making these algorithms—and how they’re doing it.
As long as it lasts, playfulness extends an invitation to another reality. We are here, but there is also somewhere else we can go. Come with me? As soon as play ends you can tell. When two dogs stop “biting” and start biting, you separate them, fast. They might not know why they switched. Then again, a philosopher might not know whether he is playing with his cat, or whether his cat is playing with a philosopher. The instinct to play may be a deeper instinct for abstraction than language—an older way to make new realities. Making new realities may be the first way that humans, and other animals, sort ourselves into teams.
In 1938, a few years before the Nazis locked him up, the Dutch scholar Johan Huizinga wrote a book about this. He called it Homo Ludens, “the playing man,” a joke that is also an argument. Huizinga argued that all of culture comes from the human instinct to make up rules and follow them, together, and against one another. Enter: athletes, music, courtship, war.