Last year, we decided to start a magazine. We felt disappointed by the mainstream conversation about technology, and we wanted to create space for an alternative. Then Trump happened. At first, nothing seemed to matter. Why publish a magazine about technology in the age of Trump? Why write or read about anything but Trump? We thought about it, and came up with a couple of reasons.
The first one is easy: Fuck Trump. We have to fight him, and we will. But we’re not going to let him monopolize our bandwidth. That’s what he wants. Trump is the hideous id of the internet. He will do anything for our clicks—and he wants all of our clicks. But modern operating systems have made us good multitaskers. We can walk and chew gum and still slay the dragon.
The second reason is more important: Trump is a wake-up call. His election proves that many of the wise men and women who claim to understand how our world works have no clue. He is a reminder that we can’t afford to have stupid conversations about important things anymore. Trump is a creature of technology. A technologized world created the conditions for Trumpism. Social dislocations caused by automation helped create his base, and social media incubated and propagated his fascist rhetoric.
Technology will continue to serve our Troll-in-Chief, as his Administration and his allies in Congress hunt undesirables with databases and drones. Some of their barbarisms will be dramatic and highly visible. Others will be piecemeal, discreet, and hard to grasp immediately. We need intelligent writing to make both forms of brutality legible. We also need to build the tools that can help us disrupt and dismantle Trump’s agenda.
The era of Trump will be a technologized era, like the one before it. We will be paying attention—and proposing ways to resist. (Along these lines, stay tuned for Tech Against Trump, a short book coming soon from Logic.) We will also be in the streets. We hope you’ll join us.
Yours, Jim Fingal, Christa Hartsock, Ben Tarnoff, Xiaowei Wang, Moira Weigel
Technology is a scourge!, technophobic scolds tweet from their iPhones. Technology will fix it!, techno-utopians proclaim on Medium. Indistinguishable middle-aged men tout indistinguishable products in front of a press all too eager to write gadget reviews indistinguishable from ad copy. Companies tank; companies IPO. Legacy media editors commission lifestyle pieces about CEO sneakers and office cafeterias brimming with lacinato kale. Press releases are distributed and regurgitated on TechCrunch. Academics write screeds against Facebook and post them on… Facebook.
We can’t stop watching. We’re so, so bored. Tech is magic. Tech lets us build worlds and talk across oceans. Whatever kind of freak we are—and most of us are several kinds—tech helps us find other freaks like us. But most tech writing is shallow and pointless. It’s nobody’s fault; everyone is just doing their job. Communications teams feed reporters winning anecdotes about their founders that explain exactly nothing. (“One day, Chad was eating a ham sandwich and realized…”) The reporters are overworked and underpaid and need to file a new story by EOD. Who can blame them for taking the bait? Editors are desperate for shareable content, which too often means some kind of caricature. Tech is either brilliant or banal, heroic or heinous. The best minds of our generation are either curing cancer, or building a slightly faster way to buy weed. The robots will either free us from drudgery or destroy civilization. Hate-click or like-click, the stories tend to be about a handful of people. The pasty boy genius. The tragic token woman. The fascist billionaire. The duo of white dudes dueling to lead us to Mars.
We deserve a better conversation. By “we,” we mean you, because everyone uses technology. We are all both its subject and object. Tech is how you find the place you live. It’s how you turn your car into a taxi or your spare room into a hotel. Tech lets you see the faces of the people you love from thousands of miles away. It helps you buy clothes, and track the steps you take trying to fit into them. You use tech to order your dinner, find a date, or at least stream the video you masturbate to when you don’t have the energy to go out.
Someday, when you do swipe right on that special someone, and they swipe right on you—it’s a match!—tech will shape how you flirt, how you define the relationship, how you plan and brag about your wedding. When you have children, you will use tech to track their development and find a babysitter. By then, maybe the babysitters will be robots, and school will be software. As you grow old, tech will help you find a caregiver. Your children will manage your physical decay remotely via app. When you have a bowel movement, they will receive a push notification. They will know what to do. They will have been on Instagram since they were a twinkle in the first fetal ultrasound photo you posted. The NSA will have been spying on them since before they were born.
The stakes are high, is what we are saying. Like you, we are both insiders and outsiders. Luckily, this is exactly the position you need to be in to observe and describe a system. In the social sciences they call it “logic”: the rules that govern how groups of people operate. For engineers, “logic” refers to the rules that govern paths to a goal. In the vernacular, “logic” means something like being reasonable. Logic Magazine believes that we are living in times of great peril and possibility. We want to ask the right questions. How do the tools work? Who finances and builds them, and how are they used? Whom do they enrich, and whom do they impoverish? What futures do they make feasible, and which ones do they foreclose? We’re not looking for answers. We’re looking for logic.
In the beginning, my plan was perfect. I would meditate for five minutes in the morning. Each evening before bed, I would do the same. There was only one catch: instead of relying on my own feelings, a biofeedback device would study my brainwaves to tell me whether I was actually relaxed, focused, anxious, or asleep.
By placing just a few electrodes on my scalp to measure its electrical activity, I could use an electroencephalography (EEG) headset to monitor my mood. Whereas “quantified self” devices like the Fitbit or Apple Watch save your data for later, these headsets loop your brainwaves back to you in real time so that you can better control and regulate them.
Basic EEG technology has been around since the early twentieth century, but only recently has it become available in portable, affordable, and Bluetooth-ready packages. In the last five years, several start-ups—with hopeful names like Thync, Melon, Emotiv, and Muse—have tried to bring EEG devices from their clinical and countercultural circles into mainstream consumer markets.
I first learned about the headsets after watching the Muse’s creator give a TEDx talk called “Know thyself, with a brain scanner.” “Our feelings about how we’re feeling are notoriously unreliable,” she told the audience, as blue waves and fuzzy green patches of light flickered on a screen above her. These were her own brainwaves—offered, it seemed, as an objective correlative to the mind’s trickling subjectivity.
Their actual meaning was indecipherable, at least to me. But they supported a sales pitch that was undeniably attractive: in the comfort of our own homes, without psychotropic meds, psychoanalysis, or an invasive operation, we could bring to light what had previously been unconscious. That was, in any case, the dream.
Day One
When I first placed the Muse on my head one Sunday evening in late October, I felt as though I was greeting myself in the future. A thin black band, lightweight and plastic, stretched across my forehead. Its wing-like flanks fit snugly behind my ears. Clouds floated by on the launch screen of its accompanying iPhone app. The Muse wasn’t just a meditation device, the app explained, but a meditation assistant. For some minutes, my initial signal was poor.
To encourage me not to give up before I’d even started, the assistant kept talking to me. Her calming female voice told me how to delicately re-adjust the electrodes to get the signal working. The ones behind my ears were having trouble aligning with the shape of my head. Eventually, the Muse accurately “sensed” my brain. It would now be able to interpret my brainwaves and translate their frequencies into audio cues, which I would hear throughout my meditation session.
Tap the button, she said encouragingly. “I’m ready,” I clicked, and my first five-minute meditation session began. Inward bound, I sat at my desk with the lamp on and closed my eyes. Waves crashed loudly on the shore, which indicated that I was thinking too much. But from time to time, I could hear a few soft splashes of water and, farther in the distance, the soft chirping of birds.
After what seemed like forever, it was over. Like all self-tracking practices (and rather unlike a typical meditation session), it seemed that the post-game was as important as the practice itself. Knowing this, I made a good-faith effort to pore over my “results.” They were, at first, second, and third glance, impenetrable. I had earned 602 calm points. In all my experiences counting—the miles of my runs, words on my documents, even the occasional calorie—I had never learned what a “calm point” was. The units used by the Muse seemed not only culturally insignificant, but void of any actual meaning. In an effort to build an internal chain of signification, the app had mysteriously multiplied these calm points by a factor of three, whereas my “neutral points” had only been multiplied by a factor of one. Birds, I was told, had “landed” sixteen times.
Equally inscrutable were the two awards I had earned. Whatever they were, I thought, they were hardly deserved, considering I had so far spent a total of seven minutes scanning my brain. One was for tranquility—“Being more than 50% calm in a single session must feel good,” the award told me. The other was a “Birds of Eden Award.” I was told that I had earned this because at least two birds chirped per minute, “which must have felt a bit like being at Birds of Eden in South Africa—the largest aviary in the world.” Not really, I thought. But then again, I had never been to South Africa.
It felt great to meditate for the first time only to be told that I was already off to a good start. But I knew deep down—or, at least, I thought I knew—that I had not felt calm during any part of the session. I was in the difficult position, then, of either accepting that I did not know myself, in spite of myself, or insisting on my own discomfort in order to prove the machine wrong. It wasn’t quite that the brain tracker wanted me to know myself better so much as it wanted me to know myself the way that it knew me.
The Brain Doctor
The second morning of my experiment, I took the subway uptown to see Dr. Kamran Fallahpour, an Iranian-American psychologist in his mid-fifties and the founder of the Brain Resource Center. The Center provides patients with maps and other measures of their cognitive activity so that they can, ideally, learn to alter it.
Some of Fallahpour’s patients suffer from severe brain trauma, autism, PTSD, or cognitive decline. But many have, for lack of a better word, normal-seeming brains. Athletes, opera singers, attorneys, actors, students—some of them as young as five years old—come to Fallahpour to improve their concentration, reduce stress, and “achieve peak performance” in their respective fields.
Dr. Fallahpour’s offices and labs lie on the ground floor of a heavy stone apartment building on the Upper West Side. When I arrived, he was in the middle of editing a slideshow on brain plasticity for a talk he was due to give at an Alzheimer’s conference. On a second, adjacent monitor, sherbet peaks and colored waves—presumably from some brain—flowed past.
Fallahpour wears bold glasses with thick-topped frames, in the style of extra eyebrows. When we met, he was dressed in a dark blue suit to which was affixed a red brooch shaped like a coral reef or a neural net—I kept meaning to ask which. He is an enthusiastic speaker with a warm bedside manner, and it was hard to shake the impression that there was nothing he would rather be doing than answering my questions about the growing use of personal EEG headsets.
His staff had not yet arrived that morning, he apologized, so he would be fielding any calls himself. As if on cue, the phone rang. “No. Unfortunately, we do not take insurance,” he told the caller. “That happens a lot,” he explained, after hanging up. “Now, where were we?” Before turning to brain stimulation technologies, Fallahpour worked for many years as a psychotherapist, treating patients with traditional talk therapy. His supervisors thought he was doing a good job, and he saw many of his patients improve. But the results were slow in coming. He often got the feeling that he was only “scratching the surface” of their problems. Medication worked more quickly, but it was imprecise. Pills masked the symptoms of those suffering from a brain injury, but they did little to improve the brain’s long-term health.
Like many of his fellow researchers, Fallahpour was interested in how to improve the brain through conditioning, electrical and magnetic stimulation, and visual feedback. He began to work with an international group of neuroscientists, clinicians, and researchers developing a database of the typical brain. They interviewed thousands of normal patients—“normal” was determined by tests showing the absence of known psychological disorders—and measured their resting and active brainwaves, among other physiological responses, to establish a gigantic repository of how the normative brain functioned.
Neuroscience has always had this double aim: to know the brain and to be able to change it. Its method for doing so—“screen and intervene”—is part of the larger trend toward personalized medicine. Advance testing, such as genomic screening, can identify patients at risk for diabetes, cancer, and other diseases. With the rise of more precise diagnostic and visualization technologies, individuals can not only be treated for current symptoms, but encouraged to prevent future illnesses.
Under the twenty-first century paradigm of personalized medicine, everyone becomes a “potential patient.” This is why the Brain Resource Center sees just as many “normal” patients as symptomatic ones. And it’s why commercial EEG headsets are being sold to both epileptics trying to monitor their symptoms and office workers hoping to work better, faster.
Brain training is seductive because its techniques reinforce an existing neoliberal approach: health becomes a product of personal responsibility; economic and environmental causes of illness are ignored. Genetics may hardwire us in certain ways, the logic of neuroliberalism goes, but hard work can make us healthy. One problem is that the preventative care of the few who can afford it gets underwritten by the data of the many.
Consider Fallahpour’s boot camp for elementary school kids. For a few hours each day during school vacations, the small rooms of his low-ceilinged offices are swarmed with well-behaved wealthy children playing games to “improve brain health and unlock better function,” as well as to acquire a “competitive advantage.” “We tune their brain to become faster and more efficient,” he explained. “The analogy is they can have Windows 3.1 or upgrade it to 10.” Before I had time to contemplate the frightening implications of this vision, the phone began, again, to ring. Fallahpour exchanged pleasantries for a few minutes, asking about the caller’s weekend. No, he told them, he did not take insurance.
Bettering Myself
The more I thought about the kind of cognitive enhancement Fallahpour promised, the more trouble I had remembering the last time I felt clear-eyed and focused. Had I ever been? Could I ever be? For a few days I had sensed a dull blankness behind my eyes. I wondered if it was a head cold, or sleep deprivation, or a newfound gluten allergy. I started sleeping more and my cold improved, but the brain fog continued. Reading a book felt like standing on a subway grate, with holes and winds weaving through the pages. I misspelled words, like “here” (hear) and “flourish” (fluorish). I waved to a man on the train who looked like someone I had dated years ago.
On a good day, I convinced myself, there was no way I was operating above sixty percent, maybe sixty-five. Sixty percent of what, I wasn’t sure. But I knew I could do better. I felt a twinge of envy toward those who had achieved the mythical “peak performance,” and I redoubled my commitment to self-improvement.
The headset remained subtly encouraging. “Whatever you’re experiencing right now is perfect,” my meditation assistant assured me—just moments before my fourth session’s calibration had paused, again, because the signal quality was too low. I re-adjusted my headset, practicing patience. “Training your mind is kind of like training a puppy,” the motivational preamble continued. “Getting angry at the puppy isn’t going to get you anywhere.” I wasn’t angry at anyone’s dog, but I couldn’t stop comparing each session’s score to the last’s. Was I hearing fewer birds? Was it easier to focus with or without caffeine? As suspicious as I was about the accuracy of the metrics, I still wanted to beat my previous score. The more elusive peak performance seemed, the more I came to realize that it was structured as an essentially nostalgic feeling. It relied on the fear that you used to be younger, sharper, more clear-eyed—and the hope that you could somehow, with practice, be this way again.
When I mentioned my experiments to a friend, he recommended that I watch a performance by the conceptual programmer Sam Lavigne. In “Online Shopping Center,” Lavigne trains a DIY EEG device to identify whether his brain is thinking about shopping online or his own mortality. Being either “shopping-like” or “death-like” was not so different, it seemed, from the Muse letting me know whether I was calm or active, focused or restless. In both cases, the data was mostly junk, the binaries reductive, the exercise absurd.
Test Subject
When I went to see Dr. Fallahpour for a follow-up visit, I was running thirty minutes late. I had forgotten to transfer trains at 59th Street because I had gotten distracted trying to make sense of an advertisement chastening me for my distraction: “Daydreamed through your stop, again?” it asked.
I had, but I wouldn’t know it yet. It wasn’t until 116th Street that I realized I had missed my stop—in fact, I had missed several. Still, I felt a low current of satisfaction when I emerged into the sunlight at 125th Street, far from where I needed to be. Such inattention made me a more viable patient for brain training than I had previously realized, in need of greater focus for even the most elementary tasks. It had also made me very late.
Fallahpour and I decided I would try a calm protocol first, followed by one that rewarded my brain for focus. While he gelled the electrodes and placed them on my scalp, I asked him about some of the skepticism surrounding EEG headsets—namely, the fact that many people, myself included, found it difficult to tell what exactly was being measured.
“EEG is a crude tool and it isn’t the best we have, but it’s the most convenient in many ways,” he explained. “It’s prone to a lot of ‘garbage in and out.’” But when done correctly, he added, it can be “useful and quite powerful.” Deciphering signals from the noise required the trained judgement of an expert like Fallahpour. In this sense, the EEG’s biofeedback wasn’t quite as seamless as going to the gym with your Fitbit. You still needed someone to help you help yourself.
To start, we took a baseline measurement of my brain. I had very quick recovery, or response, or something, in terms of what I think were my alpha waves. This meant that my ability to calm myself was sophisticated. I felt surprised at first, and dumbly flattered, much like I had during my first session with the Muse.
For the calm protocol, classical music cut in and out of my headphones depending on whether certain frequencies in my brain were active. This was visualized by red and blue columns flanking both sides of the screen. I was supposed to keep the colors under certain thresholds in their respective containers. At one point, I opened my eyes. The blue column, which had been filling up, drained suddenly. This was supposed to be a sign of resilience.
When we tested my concentration, the settings were adjusted to exercise different kinds of brainwaves. I was tasked with keeping a blue column at a certain level while not letting other red columns reach a certain height. It was more difficult than meditating with my Muse—but also, because it was a game, more enjoyable. After five minutes, I convinced myself that I felt my mind becoming more elastic, more responsive. I had started to figure out how to modify my patterns in order to play the music, even if I did not quite know what those patterns meant.
Know Nothing
By the end of my week with the Muse, my results were as perilously inscrutable as they had been at the start. Thousands of birds had chirped in my ear. An infinity of waves had crashed upon an endless shore. I had earned quite a few more badges, some by the sheer virtue of persisting: adjusting the signal, continuing the exercise day after day, not quitting in the face of a great and useless mystery.
I had learned very little about myself. This in itself wasn’t surprising. But if the EEG headsets were supposed to teach anything, their lesson was somewhat contradictory: I should know myself, but I should also be prepared to be wrong about what I knew. In this respect, the headset was more like the Oracle of Delphi’s famous precept, “Know thyself,” than its designers had intended. The dictum was initially issued as a double-edged warning about the limits of knowing and the incompleteness of interpretation—a truth that the Muse, ironically enough, confirmed.
The more I parsed my personal graphs and charts, the more I arrived at the same conclusions as anyone who has ever taken more than a passing glance at the brain. Our tools aren’t good enough. At least not yet. And the inadequate and embarrassing analogies we use to describe our brains do little to help us see ourselves clearly. In the course of the week, mine had been variously compared to a loom, a digital machine, an obsolete Windows operating system, and a puppy. What had I been expecting? That a toy would illuminate the fog? Commercial EEG devices promise that we can know our brain frequencies, even while most of those frequencies are “garbage.” The machines might be too. Average EEG devices like the Muse have been shown to have trouble distinguishing between the signals of a relaxed brainwave, stray thought, skin pulse, or furrowed brow. And several studies have disproven the efficacy of related “brain training” games, which don’t augment intelligence so much as make people better at playing the game. It’s possible that all the Muse taught me was how to score calm points and charm songbirds, not how to unlock inner bliss.
When the next Sunday came around, I was just as anxious about relaxation and relaxed about anxiety as I had been the week before. I still didn’t know whether I wanted to go shopping. Other times I thought I was thinking about death, though I couldn’t be sure. Who knows, maybe I would never know. I might even die that way—knowing very little, and getting that part wrong too.
As the Trump Administration enters its first hundred days, the 2016 election and its unexpected result remain a central topic of discussion among journalists, researchers, and the public at large. What is notable is the degree to which Trump’s victory has propelled a broader, wholesale evaluation of the defects of the modern media ecosystem. Whether it is “fake news,” the influence of “filter bubbles,” or the online emergence of the “alt-right,” the internet has been cast as a familiar villain: enabling and empowering extreme views, and producing a “post-fact” society.
This isn’t the first time that the internet has figured prominently in a presidential win. Among commentators on the left, the collective pessimism about the technological forces powering Trump’s 2016 victory is matched in mirror image by the collective optimism about the technological forces driving Obama’s 2008 victory. As Arianna Huffington put it simply then, “Were it not for the Internet, Barack Obama would not be president. Were it not for the Internet, Barack Obama would not have been the nominee.” But whereas Obama was seen as a sign that the new media ecosystem wrought by the internet was functioning beautifully (one commentator praised it as “a perfect medium for genuine grass-roots political movements”), the Trump win has been blamed on a media ecosystem in deep failure mode. We could chalk these accounts up to simple partisanship, but that would ignore a whole constellation of other incidents that should raise real concerns about the weaknesses of the public sphere that the contemporary internet has established.
This troubled internet has been around for years. Fears about filter bubbles facilitating the rise of the alt-right can and should be linked to existing concerns about the forces producing insular, extreme communities like the ones driving the Gamergate controversy. Fears about the impotence of facts in political debate match existing frustrations about the inability of documentary evidence of police killings—widely distributed through social media—to produce real change. Similarly, fears about organized mobs of Trump supporters systematically silencing political opponents online are just the latest data point in a long-standing critique of the failure of social media platforms to halt harassment.
The Wise Crowd
The Obama and Trump elections might be read as the bookends of a story about the impact of the internet on society. How do we size up the eight years between 2008 and 2016? How do we understand what happened on the internet during that time, and the ripple effect it had on the public sphere? We can tell this story through investments, companies, and acquisitions, and the threading together of the worlds of technology and the media. But doing so might miss the forest for the trees. We would miss the fact that ideology is embedded in code, and that there is a deeper story about the aspirations of the internet at the end of the first decade of the 21st century.
One critical anchor point is the centrality of the wisdom of the crowd to the intellectual firmament of Web 2.0: the idea that the broad freedom to communicate enabled by the internet tends to produce beneficial outcomes for society. This position celebrated user-generated content, encouraged platforms for collective participation, and advocated the openness of data.
Inspired by the success of projects like the open-source operating system Linux and the explosion of platforms like Wikipedia, a generation of internet commentators espoused the benefits of crowd-sourced problem-solving. Anthony D. Williams and Don Tapscott’s Wikinomics (2006) touted the economic potential of the crowd. Clay Shirky’s Here Comes Everybody (2008) highlighted how open systems powered by volunteer contributions could create social change. Yochai Benkler’s The Wealth of Networks (2006) posited a cooperative form of socioeconomic production unleashed by the structure of the open web called “commons-based peer production.” Such notions inspired movements like “Gov 2.0” and projects like the Sunlight Foundation, which sought to publish government data in order to reduce corruption and enable the creation of valuable new services by third parties. They also inspired a range of citizen journalism projects, empowering a new fourth estate.
Faith in the collective intelligence of the crowd didn’t go unchallenged. Contemporary authors like Andrew Keen railed against the diminishing role of experts in The Cult of the Amateur (2007). Jaron Lanier’s You Are Not A Gadget (2010) warned of individual intelligence being replaced by the judgment of crowds and algorithms. Eli Pariser’s The Filter Bubble (2011) expressed anxiety about the isolating effect of recommendation systems that created information monocultures. Evgeny Morozov’s The Net Delusion (2011) attacked the notion of the internet as a democratizing force.
Yet regardless of the critics, the belief in the wisdom of the crowd framed the design of an entire generation of social platforms. Digg and Reddit—both empowered by a system of upvotes and downvotes for sharing links—surfaced the best new things on the web. Amazon ratings helped consumers sort through a long inventory of products to find the best one. Wikis proliferated as a means of coordination and collaboration for a whole range of different tasks. Anonymous represented an occasionally scary but generative model for distributed political participation. Twitter—founded in 2006—was celebrated as a democratizing force for protest and government accountability.
Intelligence Failure
The platforms inspired by the “wisdom of the crowd” represented an experiment. They tested the hypothesis that large groups of people can self-organize to produce knowledge effectively and ultimately arrive at positive outcomes. In recent years, however, a number of underlying assumptions in this framework have been challenged, as these platforms have increasingly produced outcomes quite opposite to what their designers had in mind. With the benefit of hindsight, we can start to diagnose why. In particular, there have been four major “divergences” between how the vision of the wisdom of the crowd optimistically predicted people would act online and how they actually behaved.
First, the wisdom of the crowd assumes that each member of the crowd will sift through information to make independent observations and contributions. If not, it hopes that at least a majority will, such that a competitive marketplace of ideas will be able to arrive at the best result. This assumption deeply underestimated the speed at which a stream of data becomes overwhelming, and the resulting demand for intermediation among users. It also missed the mark as to how platforms would resolve this: by moving away from human moderators and towards automated systems of sorting like the Facebook Newsfeed. This has shifted the power of decision-making from the crowd to the controllers of the platform, distorting the free play of contribution and collaboration that was a critical ingredient for collective intelligence to function.
Second, collective intelligence requires aggregating many individual observations. To that end, it assumes a sufficient diversity of viewpoints. However, open platforms did not generate or actively cultivate this kind of diversity, instead more passively relying on the ostensible availability of these tools to all.
There are many contributing causes to the resulting biases in participation. One is the differences in skills in web use across different demographics within society. Another is the power of homophily: the tendency for users to clump together based on preferences, language, and geography—a point eloquently addressed in Ethan Zuckerman’s Digital Cosmopolitans (2014). Finally, activities like harassment and mob-like “brigading” proved to be effective means of chilling speech from targeted—and often already vulnerable—populations on these platforms.
Third, collective intelligence assumes that wrong information will be systematically weeded out as it conflicts with the mass of observations being made by others. Quite the opposite played out in practice, as it ended up being much easier to share information than to evaluate its accuracy. Hoaxes spread very effectively through the crowd, from bogus medical beliefs and conspiracy theories to faked celebrity deaths and clickbait headlines.
Crowds also arrived at incorrect results more often than expected, as in the high-profile misidentification of the culprits by Reddit during the Boston Marathon bombing. The crowd’s capacity to eliminate incorrect information, which seemed sufficiently robust in the case of something like Wikipedia, did not carry over to other contexts.
Fourth, collective intelligence was assumed to be a vehicle for positive social change because broad participation would make wrongdoing more difficult to hide. Though this latter point turned out to be arguably true, transparency alone was not the powerful disinfectant it was assumed to be. The ability to capture police violence on smartphones did not result in increased convictions or changes to the underlying policies of law enforcement. The Edward Snowden revelations failed to produce substantial surveillance reform in the United States. The leak of Donald Trump’s Access Hollywood recording failed to change the political momentum of the 2016 election. And so on. As Aaron Swartz warned us in 2009, “reality doesn’t live in the databases.” Ultimately, the aspirations of collective intelligence underlying a generation of online platforms proved far more narrow and limited in practice. The wisdom of the crowd turned out to be susceptible to the influence of recommendation algorithms, the designs of bad actors, in-built biases of users, and the strength of incumbent institutions, among other forces.
The resulting ecosystem feels deeply out of control. The promise of a collective search for the truth gave way to a pernicious ecosystem of fake news. The promise of a broad participatory culture gave way to campaigns of harassment and atomized, deeply insular communities. The promise of greater public accountability gave way to waves of outrage with little real change. Trump 2016 and Obama 2008 are through-the-looking-glass versions of one another, with the benefits from one era giving rise to the failures of the next.
Reweaving the Web
So, what comes next? Has a unique moment been lost? Is the ecosystem of the web now set in ways that prevent a return to a more open, more participatory, and more collaborative mode? What damage control can be done on our current systems? It might be tempting to take the side of the critics who have long claimed that the assumptions of collective intelligence were naive from their inception. But this ignores the many positive changes these platforms have brought. Indeed, staggeringly successful projects like Wikipedia disprove the notion that the wisdom of the crowd framework was altogether wrong, even as an idealized picture of that community has become more nuanced with time.
It would also miss the complex changes to the internet in recent years. For one, the design of the internet has changed significantly, and not always in ways that have supported the flourishing of the wisdom of the crowd. Anil Dash has eulogized “the web we lost,” condemning the industry for “abandon[ing] core values that used to be fundamental to the web world” in pursuit of outsized financial returns. David Weinberger has characterized this process as a “paving” of the web: the vanishing of the values of openness rooted in the architecture of the internet. This is simultaneously a matter of code and norms: both Weinberger and Dash are worried about the emergence of a new generation not steeped in the practices and values of the open web.
The wisdom of the crowd’s critics also ignore the rising sophistication of those who have an interest in undermining or manipulating online discussion. Whether in Russia’s development of a sophisticated state apparatus of online manipulation or in the organized trolling of alt-right campaigners, the past decade has seen ever more effective coordination in misdirecting the crowd. Indeed, we can see this change in how naive it now seems to create an open poll soliciting the opinions of the internet, or to set loose a bot that trains itself on conversations from Twitter. This wasn’t always the case—the online environment is now hostile in ways that inhibit certain means of creation and collaboration.
To the extent that the vision of the wisdom of the crowd was naive, it was naive because it assumed that the internet was a spontaneous reactor for a certain kind of collective behavior. It mistook what should have been an agenda, an ongoing program for the design of the web, for the way things already were. It assumed users had the time and education to contribute and evaluate scads of information. It assumed a level of class, race, and gender diversity in online participation that never materialized. It assumed a gentility of collaboration and discussion among people that only ever existed in certain contexts. It assumed that the simple revelation of facts would produce social change.
In short, the wisdom of the crowd didn’t describe where we were, so much as paint a picture of where we should have been going. Fulfilling those failed aspirations will require three major things. Platforms must actively protect the crowd’s production of wisdom. The visibility of collective decision-making and the drama of mass action online produce the illusion of strength. In reality, the blend of code and community giving rise to sustainable collective intelligence is a delicate and elusive set of human dynamics. Rather than assuming its inevitability, we should build systems—either human-driven or autonomous—for robustly shielding and cultivating these processes in the harsh environment of the web.
The mission needs to be drawn broader than code. Ensuring that the wisdom of the crowd can produce social change means creating pathways for offline action that can effectively challenge wrongdoing. Ensuring that the wisdom of the crowd can reach accurate results requires more inclusive, diverse bodies of participants. Both speak to a political agenda that cannot be achieved merely by designing tools and making them openly available.
Experimentation must be accelerated at the edges. Although we depend heavily on a few key platforms, the internet is still a vast space. Today’s platforms emerged from experimentation at the edges. To produce a new generation of robust platforms, we need more experimentation—a proliferation and wide exploration of alternative spaces for crowds to gather online.
It remains an open question whether the internet is traveling down the same, well-worn paths followed by all communications infrastructures, or whether it represents something truly new. But to accept the current state of affairs as inevitable falls prey to a fatalistic pessimism that would only further compound the problems created by the equally deterministic optimism of the decade past.
The vision of collective participation embedded in the idea of the wisdom of the crowd rests on the belief in the unique potential of the web and what it might achieve. Even as the technology evolves, that vision—and a renewed defense of it—must guide us as we enter the next decade.
Technology has a gender problem, as everyone knows. The underrepresentation of women in technical fields has spawned legions of TED talks, South by Southwest panels, and women-friendly coding boot camps. I’ve participated in some of these get-women-to-code workshops myself, and I sometimes encourage my students to get involved. Recently, though, I’ve noticed something strange: the women who are so assiduously learning to code seem to be devaluing certain tech roles simply by occupying them.
It’s not always obvious to outsiders, but the term “technology sector” is a catch-all for a large array of distinct jobs. Of course there are PR, HR, and management roles. But even if we confine ourselves to web development, technical people often distinguish among “front-end,” “back-end,” and “full-stack” development. The partition between the two “ends” is the web itself. There are people who design and implement what you see in your web browser, there are people who do the programming that works behind the scenes, and there are people who do it all.
In practice, the distinction is murky: some developers count everything that serves the user interface as the front-end, applications and databases included, while others use front-end to mean only what the user sees. But while the line shifts depending on who you’re talking to, most developers acknowledge its existence.
I spoke to a number of developers who confirmed something I’d sensed: for some time, the technology industry has enforced a distinct hierarchy between front-end and back-end development. Front-end dev work isn’t real engineering, the story goes. Real programmers work on the back-end, with “serious” programming languages. Women are often typecast as front-end developers, specializing in the somehow more feminine work of design, user experience, and front-end coding.
Are women really more likely to be front-end developers? Numbers are hard to pin down. Most studies consider the tech sector as a single entity, with software engineers lumped together with HR professionals. A 2016 StackOverflow user survey showed that front-end jobs—“Designer,” “Quality Assurance,” and “Front-End Web Developer”—were indeed the top three titles held by women in the tech industry, although that survey itself has some problems.
We need better numbers, as feminist developers have been saying for years, but it also doesn’t seem like a huge stretch to take developers at their word when they say that front-end development is understood to occupy the girlier end of the tech spectrum. Front-end developers, importantly, make about $30,000 less than people in back-end jobs like “DevOps” engineers, who work on operations and infrastructure, according to the salary aggregation site Glassdoor.
Sorting the Stack
The distinction between back and front wasn’t always so rigid. “In the earliest days, maybe for the first ten years of the web, every developer had to be full-stack,” says Coraline Ada Ehmke, a Chicago-based developer who has worked on various parts of the technology stack since 1993. “There wasn’t specialization.” Over time, web work professionalized. By the late 2000s, Ehmke says, the profession began to stratify, with developers who had computer science degrees (usually men) occupying the back-end roles, and self-taught coders and designers slotting into the front.
For many people who are teaching themselves to code, front-end work is the lowest-hanging fruit. You can “view source” on almost any web page to see how it’s made, and any number of novices have taught themselves web-styling basics by customizing WordPress themes. If you’re curious, motivated, and have access to a computer, you can, eventually, get the hang of building and styling a web page.
Which is not to say it’s easy, particularly at the professional level. A front-end developer has to hold thousands of page elements in her mind at once. Styles overwrite each other constantly, and what works on one page may be disastrous on another page connected to the same stylesheet. Front-end development is taxing, complex work, and increasingly it involves full-fledged scripting languages like JavaScript and PHP.
“Serious” developers often avoid acknowledging this by attributing front-end expertise not to mastery but to “alchemy,” “wizardry,” or “magic.” Its adepts don’t succeed through technical skill so much as a kind of web whispering: feeling, rather than thinking, their way through a tangle of competing styles.
“There’s this perception of it being sort of a messy problem that you have to wrangle with systems and processes rather than using your math-y logic,” says Emily Nakashima, a full-stack developer based in San Francisco. That’s not true, of course; nothing on a computer is any more or less logical than anything else. But perhaps it’s easier to cast women in a front-end role if you imbue it with some of the same qualities you impute to women.
The gendered attributes switch as you travel to the back of the stack. At the far end, developers (more often “engineers”) are imagined to be relentlessly logical, asocial sci-fi enthusiasts; bearded geniuses in the Woz tradition. Occupations like devops and network administration are “tied to this old-school idea of your crusty neckbeard dude, sitting in his basement, who hasn’t showered in a week,” says Jillian Foley, a former full-stack developer who’s now earning her doctorate in history. “Which is totally unfair! But that’s where my brain goes.”
The Matriarchy We Lost
The brilliant but unkempt genius is a familiar figure in the history of computing—familiar, but not immutable. Computing was originally the province of women, a fact innumerable articles and books have pointed out but which still seems to surprise everyone every time it’s “revealed.” The bearded savant of computer science lore was the result of the field’s professionalization and increasing prestige, according to the computing historian Nathan Ensmenger.
“If you’re worried about your professional status, one way to police gender boundaries is through educational credentials,” says Ensmenger. “The other way, though, is genius. And that’s something I think nerd culture does really well. It’s a way of defining your value and uniqueness in a field in which the relationship between credentials and ability is kind of fuzzy.” And “genius,” of course, is a strongly male-gendered attribute—just look at teaching evaluations.
When programming professionalized, women got pushed out. Marie Hicks, a computing historian who’s looked closely at this phenomenon, explains that as programming came to be viewed as more important to national and corporate welfare, hiring managers began associating it with a specific set of skills. In the British case, Hicks’s specialty, a good programmer was supposed to be the ultimate systems-thinker, able to see and synthesize the big picture. In the United States, as Ensmenger and others have documented, the best programmers were purportedly introverted chess nerds, obsessed with details, logic, and order. (There’s very little evidence that these characteristics actually make a good programmer.)
The traits of a “good programmer” differed by country, but they were universally male-gendered, enforced by hiring managers and other programmers who sought to replicate their own characteristics—not consciously, for the most part, but simply because the jobs were important. Hiring managers wanted to bet on qualities everyone agreed were indicators of success. “The people with more prestige in a culture are favored for all sorts of things, including jobs,” says Hicks. “If you have a job that you want to fill, you want to get the best worker for it. So in more prestigious fields, employers are looking for those employees that they think are the best bet. This tends to attract men who are white or upper-class into these more desirable jobs.”
People often think that as a profession matures it gets more complex, and thus edges women out because it demands higher-level skills. But “historically, there’s very little to bear that out,” says Hicks, who has uncovered multiple incidents of women programmers training, and then being replaced by, their male counterparts.
The Dangerous Myth of Meritocracy
The case of the female front-end developer is flipped in the other direction—it’s a feminizing subfield, rather than a masculinizing one. But it’s governed by many of the same market forces that edged women out of programming in the first place: prestige accrues to labor scarcity, and masculinity accrues to prestige. Front-end jobs are easier for women to obtain, and feminized jobs are less prestigious. In turn, the labor market generates its own circular logic: women are front-end developers because they’re well-disposed to this kind of labor, and we know this because women are front-end developers.
No one says any of this explicitly, of course, which is why the problem of women in technology is thornier than shoehorning women onto all-male panels. The developers I spoke to told me about much more subtle, very likely unconscious incidents of being steered toward one specialization or another. Two different women told me about accomplished female acquaintances being encouraged to take quality assurance jobs, currently one of the least prestigious tech gigs. Ehmke told me about a friend who applied for a back-end developer position. Over the course of the interview, the job somehow morphed into a full-stack job—for which Ehmke’s friend was ultimately rejected, because she didn’t have the requisite front-end skills.
And everyone can rattle off a list of traits that supposedly make women better front-end coders: they’re better at working with people, they’re more aesthetically inclined, they care about looks, they’re good at multitasking. None of these attributes, of course, biologically inhere in women, but it’s hard to dispute this logic when it’s reinforced throughout the workplace.
Once you’re cast as a front-end developer, it can be challenging to move to different parts of the stack, thus limiting the languages and development practices you’re exposed to. “Particularly in Silicon Valley, there’s a culture of saying developers should always be learning new things,” says Nakashima, the San Francisco-based full-stack developer. Front-end specialization “can be a place that people go to and don’t come back from. They’re working on these creative projects that are in some ways very interesting, but don’t allow them to move to an area of the stack that’s becoming more popular.”
Viewed from one angle, the rise of get-girls-to-code initiatives is progressive and feminist. Many people involved in the movement are certainly progressive feminists themselves, and many women have benefited from these initiatives. But there are other ways to look at it too. Women are generally cheaper, to other workers’ dismay. “Introducing women into a discipline can be seen as empowerment for women,” says Ensmenger. “But it is often seen by men as a reduction of their status. Because, historically speaking, the more women in a profession, the lower-paid it is.”
An influx (modest though it is) of women into the computing profession might be helping to push developers to make distinctions where they didn’t exist before. “As professions are under threat, stratification is very often the result,” says Ensmenger. “So you take those elements that are most ambiguous and you push those, in a sense, down and out. And down and out means they become more accessible to other groups, like women.” But these roles are also markedly distinct from the main work of software engineering—which is safely insulated from the devaluing effect of feminization, at least for the time being.
Hicks, the computing historian, can’t stand it when people tout coding camps as a solution to technology’s gender problem. “I think these initiatives are well-meaning, but they totally misunderstand the problem. The pipeline is not the problem; the meritocracy is the problem. The idea that we’ll just stuff people into the pipeline assumes a meritocracy that does not exist.” Ironically, says Hicks, these coding initiatives are, consciously or not, betting on their graduates’ failure. If boot camp graduates succeed, they’ll flood the market, devaluing the entire profession. “If you can be the exception who becomes successful, then you can take advantage of all the gatekeeping mechanisms,” says Hicks. “But if you aren’t the exception, and the gatekeeping starts to fall away, then the profession becomes less prestigious.”
My students are always so excited that they’re “learning to code” when I teach them HTML and CSS, the basic building blocks of web pages. And I’m happy for them; it’s exhilarating to see, for the first time, how the web is built. Increasingly, though, I feel the need to warn them: the technology sector, like any other labor market, is a ruthless stratifier. And learning to code, no matter how good they get at it, won’t gain them entrance to a club run by people who don’t look like them.
In more ways than one, medicine is dying. A 2015 article in JAMA: The Journal of the American Medical Association suggests that almost a third of medical school graduates become clinically depressed upon beginning their residency training. That rate increases to almost half by the end of their first year.
Between 300 and 400 physicians commit suicide annually, one of the highest rates of any profession, the equivalent of two average-sized medical school classes. Survey the programs of almost any medical conference and you’ll find sessions dedicated to contending with physician depression, burnout, higher-than-average divorce rates, bankruptcy, and substance abuse.
At the risk of sounding unsympathetic, medicine should be difficult. No other profession requires such rigorous and lengthy training, such onerous and ongoing scrutiny, and the continuous self-interrogation that accompanies saving or failing to save lives. But today’s crisis of physician burnout is the outcome of more than just a job that’s exceptionally difficult. Medicine is undergoing an agonizing transformation that’s both fundamental and unprecedented in its 2500-year history. What’s at stake is nothing less than the terms of the contract between the profession and society.
The Rise of the Electronic Medical Record
An electronic medical record, or EMR, is not all that different from any other piece of record-keeping software. A health care provider uses an EMR to collect information about their patient, to describe their treatment, and to communicate with other providers. At times, the EMR might automatically alert the provider to a potential problem, such as a complex drug interaction. In its purest form, the EMR is a digital and interconnected version of the paper charts you see lining the shelves of doctors’ offices.
And if that’s all there were to it, a doctor using an EMR would be no more worrisome than an accountant switching out her paper ledger for Microsoft Excel. But underlying EMRs is an approach to organizing knowledge that is deeply antithetical to how doctors are trained to practice and to see themselves. When an EMR implementation team walks into a clinical environment, the result is roughly that of two alien races attempting to communicate across a cultural and linguistic divide.
When building a tool, a natural starting point for software developers is to identify the scope, parameters, and flow of information among its potential users. What kind of conversation will the software facilitate? What sort of work will be carried out? This approach tends to standardize individual behavior. Software may enable the exchange of information, but it can only do so within the scope of predetermined words and actions. To accommodate the greatest number of people, software defines the range of possible choices and organizes them into decision trees.
Yet medicine is uniquely allergic to software’s push towards standards. Healthcare terminology standards, such as the Systematized Nomenclature of Medicine (SNOMED), have been around since 1965. But the professional consensus required to determine how those terms should be used has been elusive. This is partly because not all clinical concepts lend themselves to being measured objectively. For example, a patient’s pulse can be counted, but “pain” cannot. Qualitative descriptions can be useful for their flexibility, but this same flexibility prevents individual decisions from being captured by even the best designed EMRs.
More acutely, medicine avoids settling on a shared language because of the degree to which it privileges intuition and autonomy as the best answer to navigating immense complexity. One estimate finds that a primary care doctor juggles 550 independent thoughts related to clinical decision-making on a given day. Though there are vast libraries of guidelines and research to draw on, medical education and regulations resist the urge to dictate behavior for fear of the many exceptions to the rule.
Over the last several years, governments, insurance companies, health plans, and patient groups have begun to push for greater transparency and accountability in healthcare. They see EMRs as the best way to track a doctor’s decision-making and control for quality. But the EMR and the physician are so at odds that rather than increase efficiency—typically the appeal of digital tools—the EMR often decreases it, introducing reams of new administrative tasks and crowding out care. The result is a bureaucracy that puts controlling costs above quality and undervalues the clinical intuition around which medicine’s professional identity has been constructed.
Inputting information in the EMR can take up as much as two-thirds of a physician’s workday. Physicians have a term for this: “work after clinic,” referring to the countless hours they spend entering data into their EMR after seeing patients. The term is illuminating not only because it implies an increased workload, but also because it suggests that seeing patients doesn’t feel like work in the way that data entry feels like work.
The EMR causes an excruciating disconnect: from other physicians, from patients, from one’s clinical intuition, and possibly even from one’s ability to adhere faithfully to the Hippocratic oath. And, if the link between using a computer and physician suicide seems like a stretch, consider a recent paper by the American Medical Association and the RAND Corporation, which places the blame for declining physician health squarely at the feet of the EMR.
Drop-down menus and checkboxes not only turn doctors into well-paid data entry clerks. They also offend medical sensibility to its core by making the doctor aware of her place in an industrialized arrangement.
From Snowflake to Cog
Physicians were once trained through an informal system of apprenticeship. They were overwhelmingly white and male, and there was little in the way of regulatory oversight or public accountability. It was a physician’s privilege to determine who received treatment, and how, and at what cost.

The text of all the articles from Logic Magazine issues 1-18.

logic_raw.txt - The articles are separated by three newlines. Each paragraph is on its own line.

logic_passages.txt - The articles, broken up into passages of between 300 and 2000 characters. Each passage is on its own line.
