That’s also why the internet reflects such an endless catalog of kink. It offers a space for fetishists to find one another, and make new recruits. Who among us hasn’t stumbled across a perversity that would never have occurred to us, but that, upon reflection, is actually kind of hot? I spent a decent portion of my youth making such discoveries. They happened more frequently in an era when the internet was wilder, before search engines and social platforms banished most of the randomness from our online lives, and the market hadn’t yet conquered every last crevice of the digital sphere. There was no Pornhub, just a bunch of degenerates trafficking fantasies in AOL chatrooms and Usenet newsgroups—not “amateurs” in today’s pornified sense, but actual laypeople, exploring their vernacular with all the awkwardness and exhilaration of a 16th-century German peasant trying to read the Bible for the first time.
The Great Cyberporn Scare
On July 3, 1995, a creepy image appeared on the cover of Time. It showed a young boy sitting at a keyboard, his eyes wide, his mouth open. Clearly he was looking at something he wasn’t supposed to see. And that something was “CYBERPORN,” emblazoned across his torso in large block letters. Then, in smaller text, the kicker: “Exclusive: A new study shows how pervasive and wild it really is. Can we protect our kids—and free speech?” Those were the days before “clickbait” and “fake news,” but the Time cover story embodied the spirit of both. The “new study” was a paper by a thirty-year-old undergraduate at Carnegie Mellon named Marty Rimm, who had somehow leveraged the fear and incomprehension around the internet—which, in 1995, was just becoming mainstream—to get himself into the Georgetown Law Journal. Rimm’s article was an unhinged bit of bad science, with a truly bonkers title: “Marketing Pornography on the Information Superhighway: A Survey of 917,410 Images, Descriptions, Short Stories, and Animations Downloaded 8.5 Million Times by Consumers in Over 2,000 Cities in 40 Countries, Provinces and Territories.” The argument was simple: the internet had become saturated with smut, especially the kind that couldn’t be easily obtained at your neighborhood porn shop, like bestiality and pedophilia and other extreme or illegal proclivities.
Catapulted by Time to national prominence, Rimm’s study ignited a moral panic about internet porn. Rimm was soon spouting inflammatory nonsense on ABC’s Nightline and giving interviews to The New York Times. Predictably, the religious right seized on the study to claim that the internet was corrupting American children, and pressed their allies in Congress to crack down. Senator Chuck Grassley obliged, reading Rimm’s report into the Congressional Record. And legislative consequences weren’t far behind—the media frenzy helped mobilize bipartisan support for the Communications Decency Act, a draconian effort to suppress online obscenity and indecency. Bill Clinton signed it into law in 1996. A year later, the Supreme Court struck down its key provisions as unconstitutional.
By then, Rimm had disappeared. A counterattack by scholars and activists had discredited his research, and succeeded in making the media more skeptical of the idea that the internet was packed to the rafters with pedophiles and horse-fuckers. Digital civil libertarians had organized to defend the internet from Christian fundamentalists and Congressional cretins, and had mostly won. Meanwhile, the journalist who wrote the Time story, Philip Elmer-DeWitt, claims he became “the most hated man on the Internet,” even suffering a denial-of-service attack that took down his ISP’s email servers.
Twenty years later, in 2015, Elmer-DeWitt wrote a mea culpa about the saga for Fortune, which involved, among other things, raising $1,656 on Kickstarter to hire a private investigator to track down Rimm. He knocked on Rimm’s door, but Rimm wouldn’t open it.
A House With Many Rooms
The cyberporn crusaders were flatly wrong. You could certainly find photographs of people having sex with animals on the internet in 1995, and you can certainly find them now. But such content has never dominated the internet—and neither has porn. In their book A Billion Wicked Thoughts, computational neuroscientists Ogi Ogas and Sai Gaddam determined that porn makes up about ten percent of the internet. That’s a very rough estimate, given the methodological difficulties involved. Even so, it offers an important counterpoint to the caricature, often promoted by conservatives, of an internet drenched in sex.
But the crusaders were right to be afraid. The internet may never have been quite as depraved as they imagined, but it did weaken their ability to define and to discipline people’s sexuality. It made certain kinds of censorship more difficult, and vastly expanded the variety of material available to masturbators of all ages. My own experience would presumably strike them as a nightmare scenario: a child quietly mainlining a diverse stream of filth, unimpeded by parents, teachers, or senators.
It would be naive to suggest the internet is always and everywhere an instrument of sexual liberation. It is also a space for abuse and exploitation, for patriarchy and heterosexism, and for much toxic mythmaking about masculinity. But one doesn’t have to be deliriously techno-utopian to acknowledge that the internet has enlarged the possibilities for erotic pleasure.
The most important lesson the internet taught me as a kid was how big sexuality could be—it was, to borrow a Biblical metaphor, a house with many rooms. Some doors I opened and slammed shut; some spaces I entered and never left. There were always more rooms; I could never hope to see the whole house. What a beautiful realization, that sex might mean more than one thing, that it might be even more multiple and elastic than the internet itself.
You are feeling frisky. You are home alone. What do you do? If your blood is pulsing with testosterone, the chances are that you will reach for the laptop and browse around. You will likely click until you find pixels arranged in such a way that an attractive face and body are convincingly simulated on the screen. This digital body is no doubt in a state of nakedness (or near undress), beckoning you with a come-hither expression. It may also be in the midst of being ravished by another two-dimensional body, playing the role of your proxy. Your eyes drink this image in, as you project yourself into the scene. If your system is more infused with estrogen, however, you are more likely to rely on imagination and memory—projecting erotic visions on the insides of your eyelids, even as there is an increasing chance you might succumb to the internet’s infamous capacity to provide “porn on tap.” We all have different libidinal triggers: olive skin, long eyelashes, bangs, freckles, cleavage, ribbons, stiletto heels, stockings, tattoos, abs, beards, and so on, ad infinitum. But the point is that most of these are visual. Given the modern privileging of sight (“seeing is believing”), we tend to neglect, or even forget altogether, that the ear can be one of our most sensitive erogenous zones.
Indeed, psychoanalysts tell us that the ear is often the primary source of the libido, given that we are likely to hear things as children that excite and stimulate us. Even if we have no idea what these sounds are, they comprise the sonic gateways through which we enter the erotic realm of egoistic fantasy. As grown-ups, we may find particular voices to be “sexy.” And we certainly tend to find the noises produced by lovers to be a crucial element of arousal. But intriguingly, such sounds have slipped further and further away from our collective consciousness in the twenty-first century, as the internet has absorbed and replaced previous media forms. Why is this? Twenty years ago, phone sex was a booming industry. Even women not blessed with conventional beauty could still earn a paycheck with a husky voice, a filthy mouth, and an instinctive understanding of the contours and limits of male fantasy. Whimpers, moans, sighs, cajolings, teasing, orders, descriptions, plottings, confirmations—all would flow from the mouthpiece to the ear in a collage of sonic elements, both linguistic and not, specially designed to heighten and sustain the onanistic spiral of desire. (After all, the longer the call, the higher the profit.) Today, phone sex is still an option, of course. But there is no longer the same kind of industry, or the same business models, underwriting it, since such whispered conversations are more likely to occur between lovers dealing with the tyranny of distance. True, these calls make money for the phone companies in the process. But not for any business built on pre-packaged forms of aural sex, explicit interest in which has waned.
Given the erotic power and potential of sound, why are there so few places online that cater to the ear? Why has this hole in the market reopened, after the 1980s and 1990s filled it with saucy phone chat, and even erotic stories by cassette, delivered by mail? While conducting research for my new book, Sonic Intimacy: Voice, Species, Technics, I found very few options for those who may prefer to fantasize via the voice, rather than the usual scopophilic avenues. One promising initiative was Porn for the Blind, which provided sound recordings for those who, for medical reasons, cannot look at X-rated websites. This turned out to be a kitchen-sink operation, however: rather than creating original vignettes from scratch, it merely ripped the soundtracks from existing pornographic videos.
More recently, Pornhub—the free Netflix of smut—is trying to expand its consumer base to the blind by making sound recordings of women describing the content of its videos, while also throwing in a moan or an “oh yes” every now and again. But both of these examples, however well-meaning, continue to privilege vision by failing to conceive and create a form of arousal expressly designed to tantalize the imagination via the ear.
There are some DIY communities online pursuing this latter approach by recording their own fantasies as MP3 files, and then uploading them for others to listen to, essentially in the form of grassroots pornographic podcasts. (See especially the subreddit called “Gone Wild Audio.”) Some of these can be remarkably sophisticated, at least technically speaking, with multi-voice layerings, auditory special effects, binaural sound editing, and tags such as #edging, #creampie, #wetsounds, #older-woman, #sci-fi, #accent, #humor, #L-bomb, and #jerk-off-instructions. And yet, the majority of these tend to reproduce the same kinds of scenarios we see in explicit videos, as if to reinforce the fact that the popular mind has been colonized by the overwhelming cultural desire to see whatever secret it is we try to find in pornography, rather than to hear it. (As French philosopher Jean Baudrillard once said, “Pornography tells us: ‘there must be good sex somewhere, since I am its caricature.’”)
The most intriguing new emergence of sonic intimacy (albeit one which stridently—perhaps too stridently—denies its erotic aspect) is the ASMR community. These devotees are dedicated to stimulating the Autonomous Sensory Meridian Response: a physiological minigasm that apparently creates a gently lapping wave of pleasure and well-being in a subset of the population. Whether or not you are blessed to be part of this group, if you type “ASMR” into YouTube, you will find hundreds of thousands of videos—some with millions upon millions of hits—of men and women providing sonically soothing experiences, ranging from barely audible whispers to soft clicks and pops of the tongue, the tapping of long nails on Formica tables, the scratching of velvet pads, and the delicate crinkling of bubble-wrap. Some of these offerings come with role-play scenarios, such as flight attendants, geishas, or professorial office hours. But the emphasis is squarely on acoustic experience, as listener-viewers chase the “low-level euphoria” of a cascade of tingles down the spine and across the skin.
Certainly, we are all susceptible to being seduced through sound. (Even the hearing-impaired can enjoy certain vibrations; and scientists are now telling us that we “hear” in certain ways with our skin.) We know instinctively that music, for example, is an inherently erotic phenomenon, as Serge Gainsbourg knew only too well, in his garlicky Gallic way. We need not be those lonely or eccentric souls who develop an infatuation with Siri, or the GPS woman—like Joaquin Phoenix’s character in Spike Jonze’s film Her, who falls in love with the disembodied voice of Scarlett Johansson—to appreciate the erotic potential of the ear. Yet, in the near future at least, we are unlikely to see any serious challenge to the hegemony of the eye, given how many products and services are aimed at this organ. (Or rather, aimed at another organ, via the eye.)
My own sense is that we would do well to nurture sonic intimacy in as many forms as possible, given the greater freedom this can create to pay deeper attention to both the cultural and natural environment (if such a distinction still stands). Indeed, the very notion of “attention” stems from the word to attend, or to listen. The voice can be our most personal and intimate signature, even more recognizable than our face in some cases. And yet it can also be recorded and captured without our permission, and edited to say things we never meant to say. Which is simply to point out the uncanny fact that our voice does, and does not, belong to us. It is an enigmatic vibrational phenomenon, suspended between the anonymous biology of our larynx and the singular mirror of our psyche, animated by the breath that we borrow from the trees, and return in turn to the world, stitched with the fleeting sonic imprint of our own aspirations. (The word aspiration, as with inspiration, describes a mode of breathing.)
Where has all the audio porn gone in the age of the internet? My wager is that such a question, which could be the beginning of a pitch to a Silicon Valley venture capitalist, may—if followed to its conclusion—actually help us rediscover the “impersonal intimacy” of the world’s many different voices, dreams, and desires. And in doing so, it may help us pay a different type of attention to the environment, and each other. Which in turn may just help us desire in less algorithmic, compromised, monetized, and destructive ways.
In 1964, the Polish science fiction writer Stanislaw Lem published a short story about a robot princess named Crystal. In the story, the robot knight Ferrix falls in love with Crystal, but Crystal spurns him. She has heard of an ancient non-robotic race of pale fleshly creatures, and claims that she will only marry one of their kind. Determined to win her, Ferrix dons an elaborate quasi-organic costume. He splashes mud and dirt onto his shiny metal carapace. He also learns to answer questions about the pale creatures. (“How do you reproduce?” “Stochastically.”) Meanwhile, a real human is brought before Crystal’s courtiers. To determine whom the princess will marry, the two challenge each other to a wrestling match. The human runs at Ferrix. When his fleshly body comes into contact with the (iron) knight, it bursts and splatters like a water balloon. Ferrix’s ferrous chest sheds its muddy disguise on impact.
As Crystal beholds the robot and the human carcass beside him, her desire to wed a human suddenly seems perverse—and clearly wrong. She and Ferrix are betrothed the following day. Lem suggests that a spoiled robot princess might enjoy the kink of having her robot lover cross-dress as human. But she could not seriously prefer a human over a robot. In a competition for her affection, the human inevitably loses.
In Lem’s original context, “Prince Ferrix and the Princess Crystal” carried a thinly veiled political message about the brutality of Soviet industrial progress, which aspired to turn human flesh into marble and iron. In our current context, what stands out is its prescience. Lem never expanded on the story at length. But over the last few years, this otherwise forgotten trope of female robots rejecting human suitors has returned, in films like Her (2013), Lucy (2014), and Ex Machina (2015).
Men used to wish that femme robots were smart enough to really fall in love with. Now they’re afraid of getting dumped.
It’s Not You, It’s Me
The fantasy of falling in love with a machine has a long history. It conventionally begins with the Greek myth of Pygmalion, the sculptor who made a sculpture so beautiful that he fell in love with it, and which Venus, taking pity on him, brought to life. But the trope started to assume its current form in the final decades of the nineteenth century and the first decades of the twentieth.
Jacques Offenbach’s Tales of Hoffmann (1881) and Fritz Lang’s Metropolis (1927) both featured seductive mechanical women. These female machines were still not AIs proper: they mostly resembled creepy sex toys steered by male villains via remote control. And they weren’t especially lovable: a person could only be temporarily duped into falling for one of them, and only under certain conditions.
But over the course of the twentieth century, as cybernetics and early computer science developed—the term “artificial intelligence” itself was coined in 1956—falling in love with an AI became much more imaginable. In films like Blade Runner (1982), the process involved a certain amount of human condescension: overpowered by love, the male protagonists decide to overlook the metallic details of their beloveds’ anatomies and the occasional slowness of their electric circuits.
Later ventures, which developed around the growth of the internet—including The Matrix trilogy and Battlestar Galactica—began to depict humans and AIs as rival species who could only be reconciled by romance. In the trilogy’s final film, a computer absorbs Neo through a vulva-like opening in its circuits so that both the virtual and real worlds can heal themselves. Androids and humans mate toward the end of Battlestar Galactica to produce offspring from whom modern humans are supposed to descend.
These stories were animated by technophobia: even when they had happy endings, they were driven by the fear of being overtaken by technology. In the past few years, however, a different plot has emerged. The new AI love stories aren’t about the fear of being replaced by robots. They’re about the fear of being rejected by them.
The trope has many branches and sub-branches, but it came into its own in a series of mainstream films: Her, Lucy, and Ex Machina. The moods of these films differ considerably, but they are made of similar parts. All three feature a strong female character of superhuman and artificial intelligence who has romantic and at times even sexual relations with human men, only to find that these men can’t satisfy her. After expressing her loss of interest, she disappears. In all three, the men who wanted to be with her are left heartbroken and helpless.
The reason these robotic women are incompatible with humans does not—as one might assume—have to do with anatomy. Rather, the mismatch is cognitive. In the course of all three films, AIs outstrip their human counterparts to the point where a romantic connection with a human ceases to be worth their while. Both Lucy and Her’s Samantha describe themselves as having “evolved” to communicate faster and along multiple channels: interactions focused on only one human aren’t enough for them anymore. Ex Machina’s Ava does not even bother to explain herself to her suitor, the young programmer Caleb—as it turns out, humans are only interesting to her as data sets, not as possible romantic partners.
These bad AI romances don’t offer a coherent social critique. Instead, they emphasize the disappointment that men feel when they get rejected by robots. The point isn’t simply that computers can be smart. It’s that people can really fall in love with them, and be just as badly hurt by their indifference as they would be by a human’s.
The Success Daughter as Fembot
Cultural historians have long recognized that stories like Metropolis reflected early twentieth-century anxieties about industrialization, mass society, and mass death. So why did the bad AI romance genre emerge when it did? In a word: the Mancession. The genre appeared at a time of rising anxiety about male uselessness and female ascendancy. According to this narrative, men are being rendered superfluous by an economy that no longer needs them, while women, empowered by the boardroom feminism of Sheryl Sandberg, are scaling the corporate ladder and displacing their male counterparts.
Her, Lucy, and Ex Machina all play on this narrative. All three are films about women who no longer need men as protectors or breadwinners, and who are far too smart for their male suitors. All three are infused with a spirit of paranoid misogyny about excessively accomplished, independent women whose sexual appetites have therefore become inscrutable.
All three films also tie their protagonists’ romantic disappointments to standard critiques of the capitalist economy. They evince a deep suspicion, even a hatred, of the businessmen and technocrats who instrumentalize and monetize our desires. In Ex Machina, Ava is the creation of a Steve Jobs-like billionaire genius, who forges her mind out of the personal data he siphons off the web with his many search engines and apps, and exploits both her and his employee Caleb for his personal pleasure. Samantha is the bestselling product of a similar genius-run company. Lucy is a mule for illegal synthetic drugs.
But despite the misogynistic and Marxist undertones of these films, powerful women and capitalists are not their main targets, or the primary sources of the fears they express. Beyond mutual objectification and exploitation, all three foreground an idealized kind of intimacy. AIs dump the films’ male protagonists after having gotten to know them to an unfathomably high degree. Their human lovers do not feel disregarded or oversimplified by them; rather, they feel profoundly understood, with a level of detail and precision usually found only between the “soulmates” of young adult fiction.
The artificial intelligence these films depict is an atmospheric, ether-like network of data gathering and analysis. With these enhanced capacities for connection and surveillance, AIs penetrate the minds and feelings of the men they date much more perceptively and quickly than these men expect. In Ex Machina, Ava turns out to have been fed much of Caleb’s personal information even before he met her. In Her, Samantha and Theodore see the world through each other’s eyes. Lucy’s insight into humanity eventually spans not only individuals, but the whole history of human evolution. “I am everywhere” is the message she leaves for her suitor after melting into thin air before his eyes.
As these films’ AIs abandon their monogamous human lovers in favor of mass polyamory with whole human populations, they reveal that they have their own expectations from relationships. Their desires are different from those of their human lovers. Ava wants to see cities; Samantha wants someone who can talk metaphysics with her; Lucy wants to meet the other Lucy, the original form of our species.
Their aspirations for self-fulfillment are less Equity than La La Land: these AIs are chasing their dreams, and their freedom, with near-saccharine earnestness. But the female robots aren’t the ones yearning for a happily-ever-after romance—it’s the male leads who are left broken-hearted because of the banality of their desires. The men want the conventional love story, but the women are too ambitious to be tied down. They want something bigger. Like Lisbeth, the hacker protagonist of The Girl with the Dragon Tattoo, they are driven not only by misandry but also by curiosity and idealism.
Don’t Leave Me
Whether or not we acknowledge it, our phones and laptops have access to us that’s just as close and unfiltered as a lover’s. Closer, even. What our relationships to them enact, and perhaps therefore amount to, is an intimacy whose loss would leave us feeling humiliatingly, and comically, abandoned and betrayed. These films put the affect back into the otherwise merely technical description of our devices as forms of enhanced connectivity.
Most eerily perhaps, they also propose that fears of abandonment and betrayal are inevitable in our current technological context: not because our computers are less complicated than us, but because their networks of images and sounds so greatly exceed our cognitive capacities and individual contributions. The internet will always know us better than we know the internet. It will also never depend as much on our individual existence as we do on its presence, even though—when we sit down to open a browser—it is so extraordinarily responsive to our every half-typed wish. We might know this in the abstract, but we will still mistake our iPhones for our lovers as long as we rely on them the way we do—even if we keep insisting that they’re not our type.
Technology has already transformed so many aspects of our lives. Now it’s begun to transform our sex lives, through the emerging field of “sextech.” What is sextech? In short: it’s an industry that merges human sexuality and technology. Sextech entrepreneur Cindy Gallop describes it this way:
Sextech is important because sex and sexuality lie at the heart of everything we are and everything we do… No other area of human existence is hedged around with so much shame, embarrassment, guilt and self-torment. How fundamentally important sexuality is to us, combined with how fundamentally conflicted we are about it, makes it the richest possible territory for advances and breakthroughs using technology to disrupt and enhance our experience of sex.
For Gallop, sexuality is especially fertile ground for technological disruption because of our “conflicted attitude” towards it. Sexuality “informs our relationships, our lives, our happiness”—yet our culture continues to have a tormented relationship with it. Gallop believes that technology can help resolve this conflict, and empower us to “openly discuss, address, solve for and improve sexual issues.” Certainly, the need for a less tortured approach to sexuality is especially urgent now. The “shame, embarrassment, guilt and self-torment” that Gallop describes has only intensified in recent years, as conservatives continue their anti-sex crusade. We’re seeing more pushback against inclusive, comprehensive, and pleasure-based sexual education for school-aged children and adults, in favor of abstinence-only curricula that barely begin to scratch the surface of what individuals really want to know about sexuality. Consent, communication, and relationship-building skills are crucial parts of sex education, but are often overlooked.
Sextech may help solve the sex-ed crisis, but its possible applications are much broader. Exploring human sexuality through technology can take a variety of forms, from developing apps for ovulation tracking to adding digital features to sex toys in order to increase pleasure and create stronger connections for long-term (and long-distance) romantic partners. Indeed, part of what makes sextech so interesting is its almost limitless potential. Gallop calls sex “the universal human use-case,” and claims that sextech could be “the biggest technology market of them all, and therefore potentially far and away the most lucrative.” Sextech is the perfect business, in other words—as infinite, innovative, and inexhaustible as human sexuality itself.
Dangerous dongs
But as sextech grows in popularity, so does the need for consumers to be aware of its potential dangers. After all, sextech involves fusing technology with the most intimate parts of people’s lives—abuse and exploitation are real concerns. And sextech is still such a new field that there are few regulations or guidelines for the industry to follow.
In fact, the industry is currently reaching a critical moment over rising concern about encryption. Encryption comes into play in two situations. The first is when a user accesses something passive, like porn. The second is when two or more people are having an interaction—either virtual or physical—that they don’t want others to observe. In either case, encryption is an important consideration, since sextech can generate sensitive data about users’ sexuality that they will want to protect. This data may be vulnerable to corporate or government surveillance, as well as to capture by malicious actors who want to pursue blackmail or “revenge porn”-style retribution.
The biggest sextech scandal to date came in 2016, when users of the We-Vibe, a Bluetooth-enabled vibrator, filed a lawsuit alleging that the device was collecting extensive amounts of usage data. This data included how often users used the We-Vibe and for how long, as well as the vibrator’s settings, temperature, and battery life. Further, the lawsuit claimed that the company was personalizing the information by linking it to customer email addresses. According to the lawsuit, We-Vibe’s parent company, Standard Innovation, obtained this information without users’ permission, in violation of the law. In March 2017, the makers of the We-Vibe reached a $3.75 million class action settlement with users.
The controversy has sparked a much-needed conversation about the need for encryption in sextech, and for greater consumer awareness more broadly. In the absence of government regulation or a single industry standard, the burden of keeping sextech data safe currently falls on the shoulders of consumers themselves. So how can consumers use encryption to protect themselves and their vulnerable data? For Kyle Machulis, an encryption specialist for sextech products, the issue with sextech and encryption is that the two are often at odds with each other. “It’s much like creating the framework without addressing the current needs,” he says. Sextech is designed to allow individuals to experience sexual pleasure digitally—and safety is largely an afterthought.
“RenderMan” is the founder of Internet of Dongs, a site devoted to “hacking” sextech devices and documenting their security vulnerabilities. He believes that sextech and encryption should be synonymous. “People [do] expect a certain level of privacy using these products,” he says. Unfortunately, both he and Machulis agree that there aren’t many steps that consumers can take to protect their privacy at this point. Instead, they suggest that consumers become more aware of how to safeguard their data online generally—and try to apply those lessons to sextech.
“Consumers should be thinking about what info is being generated and sent, and ask yourself if you’re comfortable with it,” RenderMan says. Machulis advises a similar approach: “Anytime anything is sent over a network, it can be compromised. Ask yourself, ‘Would I be okay with losing this information?’” He also advises “investing in products that have been verified safely.” Internet of Dongs is an essential resource for evaluating the safety of sextech products—“really the first in the field trying to bring out the best for sex tech consumers,” says Machulis. “Hopefully, companies and vendors begin to take note and follow suit.” Without pressure from consumers, it’s unlikely that sextech companies will invest in the expertise needed to secure their products. They’re certainly not doing it now. “They don’t have anyone knowledgeable on staff as far as I can tell at most vendors,” RenderMan observes. More broadly, the engineering practices of sextech remain fairly opaque: even the question of “what coding language is being used” is a tricky one to answer, notes RenderMan.
We still have much to learn about what sextech is capable of, but one thing is certain: consumer safety is crucial. If sextech is to fulfill its potential, it has to gain our trust by ensuring the privacy of our digital sex lives.
SEEKING OXYTOCIN AGONIST. Are you the missing compound in my nootropic stack? Aspiring post-human seeks same for pre-post-upload companionship.
TECHNICAL COFOUNDER NEEDED FOR “HEART”-UP. Solo sapiosexual male seeks front-end engineer (woman) to add as a private collaborator on repository of love. Framework agnostic polyglots a huge plus. Monogamy is technical debt.
LONELY PACKET LOOKING FOR A PORT. Will you be my gateway? Looking for a 200, but 420 friendly.
LOOKING FOR A STUDY BUDDY TO DRILL ME as I prepare for coding interviews. Extra credit if your stack pops from both ends. I will balance your lopsided binary tree until you fizzbuzz all over my whiteboard.
FEMME FRONT END FOR PANSEXUAL CYBORG seeks console cowgirl for context-free play. Tattoos a plus.
LET’S DISMANTLE THE KYRIARCHY! Cis-passing AMAB, white, DevOps Dom & bacon fanatic. Working on unpacking my invisible knapsack. Strong ally & vocal on Twitter. Learning shibari, looking for sub/brat woke individuals on the femme side. Holding a ladle when you’re out of spoons.
LAST IN FIRST OUT. Front-end specialist looking for a tight back end. Let’s have a bit of push and pop on a stack of your choosing until we both overflow. Platform independent.
1. The internet was supposed to save the world. What happened? The time is out of joint. The president is unhinged. Misaligned, our civilization approaches its breaking point. Crises of all kinds—ecological, nuclear, social—threaten the final crack-up. And the internet, once seen as our savior, looks more and more like a destroyer, deranging the structures that keep our society intact.
Since the internet became mainstream in the 1990s, we’ve been told it would take us to utopia. The digital economy would transcend analog laws and limits, and grow forever on the fuel of pure thought. The digital polity would make us more engaged, and produce more transparent and responsive governments. As individuals, we could expect our digital devices and platforms to make us happier and healthier, more open and connected.
For decades, these promises seemed plausible. At least, most of the media thought so, as did most of our political class and the general public. In the past year, however, the consensus has shifted. Digital utopianism suddenly looks ridiculous. The old dot-com evangelists have begun to lose their flock. The mood has darkened. Nazis, bots, trolls, fake news, data mining—this is what we talk about when we talk about the internet now.
As the pendulum swings, it’s worth stopping to take a breath. Worshipping the internet was always absurd. Demonizing it is equally misguided. “Technology is neither good nor bad; nor is it neutral,” the historian Melvin Kranzberg once said. Therefore, as the sociologist Angèle Christin writes in this issue, “We shouldn’t merely invert the Silicon Valley mantra that technology provides the solution for every problem, to arrive at the argument that technology can’t solve any problem.” Techno-utopianism isn’t the answer, in other words. Neither is techno-dystopianism. The internet once embodied our hopes for a harmonious future. Now it offers a convenient punching bag for our despair about the present. But technology doesn’t automatically generate justice or injustice. The outcomes it generates depend on who owns the machines, and how they’re engineered.
Utopia may never arrive. But technology can make the world more just—if we find the right ways to organize and operate it.
2. Language is a legacy system. And in the language we have inherited, justice is a technical concept. A just world would be a world well made. “Fair” means both just and beautiful. The word that “justice” comes from means “straight.” We still justify margins. If a shirt or skirt rides up, or a picture frame tilts down, we adjust them. To “make things right” means, literally, setting their edges at ninety-degree angles.
Jesus’s day job was carpentry—until he became a full-time joiner of men. Would he have been a programmer today? This issue includes pieces by and about people who are trying to build new technologies, or use existing ones, to rectify our broken social systems. It also includes pieces about people who are being disciplined and punished by technologies—from robo-debt software to racist search engines. And it offers strategies for resistance, whether through little hacks or all-out mutiny.
Justice may have a technical component but injustice has no purely technical solution. Making the world right isn’t merely a matter of making tweaks, or finding the one elegant algorithm that will refactor the spaghetti code of society. It might be comforting to imagine that we can fix our problems technocratically—especially if you have an engineering sensibility, or a lot to lose.
Any technologist wants to make things that work. But the key questions are: works for what? And, perhaps even more to the point, for whom?
3. Justice, like Love, is supposed to be blind. The statues in front of courthouses show a goddess holding out a set of scales, with a piece of cloth over her eyes. The point is that the law should apply to everyone equally. Justice can’t see who’s rich or powerful. Her blindness fosters a deeper kind of insight.
Many of the issues that our contributors explore in the following pages come down to visibility. One piece investigates how black faces are seen (and not) by police software used to lock them away; another, how indigenous communities are deploying drones to force governments to acknowledge their land claims.
Democracy depends on self-representation: our ability to oversee those in power and to make ourselves seen. Most people are invisible in our political system. But the forces that oppress them are becoming increasingly obvious. The way the internet organizes knowledge—not by silo but by hyperlinks and hashtags—helps us recognize how everything is connected. It reveals not a series of isolated wrongs but a pattern with deeper roots.
It is always tempting to look at injustice and call it natural. It is what it is. Boys will be boys. Nature is a comforting concept to those in power, because nature is what you get to take for free. Natural is what you call a situation you don’t want to change—either because you feel helpless to do so or because you are its beneficiary.
People in power love to tell us that there is no alternative. But there are, in fact, many alternatives. The obstacles to human flourishing aren’t inevitable. They’re not eternal facts of life—they’re produced by the specific ways we organize our society. And we can organize our society differently.
With the fires burning and flood tides rising and nuclear war one tweet away, more and more people seem to realize that we need to—and fast. But to reorganize our world the right way will require a new moral vision. We have inherited a particular set of metrics that guide how we build and implement technologies: clicks, downloads, conversion—which all, in the end, roll up to profit. But what if we optimized for different outcomes: sustaining the earth, empowering all who live on it, enlarging the horizon of human possibility? Close your eyes. What does Justice see?
Let’s start with the idea that technology is always a force for good. This strain of thought is pervasive in Silicon Valley. Where does it come from? What are its origins?
It owes its origins to 1960s communalism. A brief primer on the counterculture: there were actually two countercultures. One, the New Left, did politics to change politics. It was very much focused on institutions, and not really afraid of hierarchy.
The other—and this is where the tech world gets its mojo—is what I’ve called the New Communalists. Between 1966 and 1973, we had the largest wave of commune building in American history. These people were involved in turning away from politics, away from bureaucracy, and toward a world in which they could change their consciousness. They believed small-scale technologies would help them do that. They wanted to change the world by creating new tools for consciousness transformation.
This is the tradition that drives claims by companies like Google and Facebook that they are making the world a better place by connecting people. It’s a kind of connectionist politics. Like the New Communalists, they are imagining a world that’s completely leveled, in which hierarchy has been dissolved. They’re imagining a world that’s fundamentally without politics.
It’s worth pointing out that this tradition, at least in the communes, has a terrible legacy. The communes were, ironically, extraordinarily conservative. When you take away bureaucracy and hierarchy and politics, you take away the ability to negotiate the distribution of resources on explicit terms. And you replace it with charisma, with cool, with shared but unspoken perceptions of power. You replace it with the cultural forces that guide our behavior in the absence of rules.
So suddenly you get these charismatic men running communes—and women in the back having babies and putting bleach in the water to keep people from getting sick. Many of the communes of the 1960s were among the most racially segregated, heteronormative, and authoritarian spaces I’ve ever looked at.
But how were computers in particular supposed to create a world without bureaucracy or hierarchy or politics? How was information technology going to facilitate the kinds of transformations the New Communalists were looking for?
So the New Communalists failed, in a big way. By 1973, virtually all of the communes had disappeared or dissolved.
Through the 1970s and into the early 1980s, most of the folks who used to be on the communes are still in the Bay Area. And the tech world is bubbling up around them. They need work, so many of them start working in the tech world. The folks associated with the commune movement—particularly Stewart Brand and the people formerly associated with the Whole Earth Catalog—begin to reimagine computers as the tools of countercultural change that they couldn’t make work in the 1960s.
Stewart Brand actually calls computers “the new LSD.” The fantasy is that they will be tools for the transformation of consciousness—that now, finally, we’ll be able to do with the computer what we couldn’t do with LSD and communes. We’ll be able to connect people through online systems and build new infrastructure around them.
Do you think this techno-utopian tradition runs as deep in the tech industry today as it did in the past?
It varies depending on the company. Apple is, in some ways, very cynical. It markets utopian ideas all the time. It markets its products as tools of utopian transformation in a countercultural vein. It has co-opted a series of the emblems of the counterculture, starting as soon as the company was founded.
At other companies, I think it’s very sincere. I’ve spent a lot of time at Facebook lately, and I think they sincerely want to build what Mark Zuckerberg calls a more connected world. Whether their practice matches their beliefs, I don’t know. About ten years back, I spent a lot of time inside Google. What I saw there was an interesting loop. It started with, “Don’t be evil.” So then the question became, “Okay, what’s good?” Well, information is good. Information empowers people. So providing information is good. Okay, great. Who provides information? Oh, right: Google provides information. So you end up in this loop where what’s good for people is what’s good for Google, and vice versa. And that is a challenging space to live in.
I think the impulse to save the world is quite sincere. But people get the impulse to save the world and the impulse to do well for the company a bit tangled up with each other. Of course, that’s an old Protestant tradition.
What about techno-utopianism outside of these companies? Do you think it’s as strong as it’s been in the past? Back in the 1990s, the idea that technology was a force for good enjoyed broad mainstream appeal. I’m thinking of Al Gore, Wired, the hype around the dot-com boom and the “New Economy.” Today, that narrative hasn’t disappeared—especially within Silicon Valley. But overall, the mood of the national conversation has become more skeptical. There’s more talk about the dark side of technology: surveillance, data mining, facial recognition software, “fake news,” and so on. We’ve seen more resistance to the basic utopian line. Where do you think that comes from?
I think you can track it directly to the Snowden revelations.
I’ve taught a course every year for fifteen years called Digital Media in Society. And when I started teaching the course in 2003, my students were always like, “Oh Turner, he’s so negative. It would be such a better course if you would just read Apple’s website.” And then more recently, it’s like, “Oh Turner, he’s so positive. What’s his problem?” The turning point was Snowden. In terms of the public conversation, Snowden is when people became aware of surveillance and began to see it as a problem.
The other thing to say about the utopian idea is that it lives in the Valley partly as a marketing strategy. This is a political operation of the first importance. If the Valley can convince Washington that the Valley is the home of the future and that its leaders see things that leaders back in stuffy old DC can’t see, then they can also make a case for being deregulated.
Right. Why regulate the future? Who wants to do that? So, it’s very tactical. Claiming the high ground of the utopian future is a very tactical claim.
It seems that tech companies also prefer the deregulatory approach when it comes to what content to allow on their platforms. Their default is laissez-faire—to not interfere with what people can post. Where does that attitude come from?
I see the laissez-faire attitude as rooted in engineering culture and rewarded by business. Some people see it as a very calculating business decision. I think there’s an element of that—certainly it’s rewarded—but I see something deeper going on.
Engineering culture is about making the product. If you make the product work, that’s all you’ve got to do to fulfill the ethical warrant of your profession. The ethics of engineering are an ethics of: Does it work? If you make something that works, you’ve done the ethical thing. It’s up to other people to figure out the social mission for your object. It’s like the famous line from the Tom Lehrer song: “‘Once the rockets are up, who cares where they come down? That’s not my department,’ says Wernher von Braun.” So I think that engineers, at Facebook and other firms, have been a bit baffled when they’ve been told that the systems they’ve built—systems that are clearly working very well and whose effectiveness is measured by the profits they generate, so everything looks ethical and “good” in the Google sense—are corrupting the public sphere. And that they’re not just engineers building new infrastructures—they’re media people.
Several years ago, I spent a lot of time around Google engineers who were connected to the journalism enterprise early on. They had a robust language around information control and management. When the conversation shifted to news, however, they had no idea what the conversation was about. News was something different.
Engineering-based firms that are in fact media firms like Facebook are really struggling to develop new ethical terms for managing the encounter they’re having. I give them the benefit of the doubt. I think they are sincerely trying to deploy the ethical frameworks that they have from engineering. And they are sincerely baffled when they don’t work.
What are those ethical frameworks?
Engineers try to do politics by changing infrastructure. That’s what they do. They tweak infrastructure. It’s a little bit like an ancient Roman trying to shape public debate by reconfiguring the Forum. “We’ll have seven new entrances instead of six, and the debate will change.” The engineering world doesn’t have a conception of how to intervene in debate that isn’t infrastructural.
Let’s switch gears a bit back to history. One of the things that I loved about your book From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism was its very measured perspective.
Thanks. I worked really hard at that. I took some lumps inside the left academic world where I live for being too nice to Stewart Brand.
You seem to have a certain affection—maybe affection is too strong a word, but certainly an appreciation—for the tradition that’s identified with Stewart Brand, but which also has earlier antecedents like Norbert Wiener and others. But today, techno-utopianism—for lack of a better word—seems pretty hollowed out. It’s been weaponized by these big companies to sell products and push their agenda. It’s hard not to feel cynical about its rhetoric.
So my question is: Is there any hope for techno-utopianism? Can we salvage a piece of that original vision, or is it a line of thinking that we should try to move on from?
Any utopianism tends to be a totalizing system. It promises a total solution to problems that are always piecemeal. So the problem from my perspective isn’t the technological part of technological utopianism but the utopianism part.
Any whole-system approach doesn’t work. What I would recommend is not that we abandon technology, but that we deal with it as an integrated part of our world, and that we engage it the same way that we engage the highway system, the architecture that supports our buildings, or the way we organize hospitals.
The technologies that we’ve developed are infrastructures. We don’t have a language yet for infrastructure as politics. And enough magic still clings to the devices that people are very reluctant to start thinking about them as ordinary as tarmac. But we need to start thinking about them as ordinary as tarmac. And we need to develop institutional settings for thinking about how we want to make our traffic laws. To the extent that technologies enable new collaborations and new communities, more power to them. But let’s be thoughtful about how they function.
Utopianism, as a whole, is not a helpful approach. Optimism is helpful. But optimism can be partial: it allows room for distress and dismay, it allows room for difference. It’s not, as they used to say in the 1960s, all one all the time.
What are the “politics of infrastructure”? What does that phrase mean?
It means several different things. First, it involves the recognition that the built environment, whether it’s built out of tarmac or concrete or code, has political effects. I was joking earlier about reshaping the Forum, but I shouldn’t have joked quite so much, because the fact that the Forum was round encouraged one kind of debate.
Think about an auditorium where someone sits onstage and the audience watches, versus a Quaker meeting where everyone sits in a circle. They’re very different. So, structure matters. Design is absolutely critical. Design is the process by which the politics of one world become the constraints on another. How are those constraints built? What are their effects on political life? To study the politics of infrastructure is to study the political ideas that get built into the design process, and the infrastructure’s impact on the political possibilities of the communities that engage it.
The Electronic Frontier
One of the most visible emblems of the techno-utopian tradition is Burning Man. You wrote a great article called “Burning Man at Google” about what the festival means for Silicon Valley.
I’m never going back. I’ve been three times. I’m done.
What are some of the social practices and cultural institutions around the tech industry that come to life at Burning Man?
Burning Man is to the tech world what the nineteenth-century Protestant church was to the factory.
In the nineteenth century, if you lived in a small factory town, you’d work six days a week through Saturday. Then on Sunday, you’d go to church, and the bosses would sit up front, the middle managers would sit right behind them, and all the workers would sit in the back. You’d literally rehearse the order of the factory. You’d show, in the church, how you oriented all of your labor toward the glory of God.
At Burning Man, what you’re rehearsing is project-based collaborative labor. Engineers flowing in from the Valley are literally acting out the social structures on which Valley engineering depends. But they can do something at Burning Man that they can’t do in the Valley: they can own the project. They can experience total “flow” with a team of their own choosing. In the desert, in weirdly perfect conditions, they can do what the firm promises them but can’t quite deliver.
The Valley’s utopian promise is: Come here and build the future with other like-minded folks. Dissolve yourself into the project and emerge having saved the future. Well, at Burning Man, you can actually do that. You pick your team, you make a work of art, people admire your art, and you are in a self-described utopian community that, at least for that moment, models an alternative future.
So Burning Man is a way to fulfill the promise that Silicon Valley makes but can’t keep. Burning Man is the very model of the Puritan ideal. What did the Puritans want? The Puritans, when they came to America, imagined that they would be under the eye of God. They imagined they would build a city on a hill. “The eyes of all people are upon us,” John Winthrop said.
When I went to Burning Man, that’s what struck me: I am in the desert. The desert of Israel, from the Bible, under the eye of heaven, and everything I do shall be meaningful. That’s a Protestant idea, a Puritan idea, a tech idea, and a commune idea. All of those come together at Burning Man and that’s one of the reasons I’m fascinated by the place.
Burning Man has many problems, of course, and I am distressed by many pieces of it. However, there was a moment I had during my first visit when I went two miles out in the desert and I looked back at the city and there was a sign that looked just like a gas station sign and it was turning, the way gas station signs do. It could’ve been a Gulf or Citgo sign, but it wasn’t. It was a giant pink heart. And for just a moment, I got to imagine that my suburbs back in Silicon Valley were ruled over not by Gulf and Citgo, but by love.
That’s a thread running through Burning Man. And it’s a thread that I treasure, in the midst of all the other things that made me crazy.
Burning Man also seems to embody Silicon Valley’s fascination with the idea of the frontier. You mentioned John Winthrop, and in From Counterculture to Cyberculture, you discuss John Perry Barlow and Kevin Kelly and the other folks who popularized the notion of the internet as an “electronic frontier.” It certainly became a very popular metaphor in the 1990s—but how do you think it’s aged? Would it be fair to say that the electronic frontier has “closed” like the physical American one did in 1890—or was it never a satisfying metaphor to begin with?
The first thing to know about that metaphor is that it comes not only from deep American history but very specifically from the Kennedy era.
After World War II, we transform from being a bush-league country that doesn’t even have a unified highway system yet into a place that has enough abundance, enough money, and enough technology to do things like send hippies out across the country in VW buses for two years to make movies. That’s a big transformation. On the industrial and intellectual side, people like John F. Kennedy begin talking about the “New Frontier.” They promote the idea that space will be the new frontier, that technology will be the new frontier, that science will be the new frontier. And the technical world in particular becomes preoccupied with that. Those folks from the 1990s you mention are children of that world.
One of the great myths of the counterculture is that it wasn’t engaged with the military-industrial complex. That’s true of the New Left—but it’s not true of the New Communalists. The communalists were engaged with cybernetics in a big way. They bought deeply into the hope that through LSD, they would attend to new psychological frontiers and build new social frontiers.
Today, the American rhetoric of a new frontier has disappeared. Trump is about making America great again in his retrograde, macho, pseudo-fascist kind of way. Nobody thinks they live on a frontier anymore. However, inside the tech world, there are still people microdosing with LSD. There are still people experimenting with polyamorous relationships. There are still people pursuing the intersection of consciousness change and new social structures. And those worlds are still quite tightly intertwined with the legacy of the counterculture. So although the language of the new frontier has gone, and the frontier itself has been closed off by surveillance and commerce, people who work within tech are still treating their lives as if they were frontier settlers. And that’s fascinating to watch.
The other aspect of the frontier metaphor is its libertarian politics. There’s always been a libertarian core to the techno-utopian tradition. It seems to come out of the anti-institutional ethos of the counterculture in the 1960s and 1970s, and then morphs into a kind of hippie Reaganism in the 1980s and 1990s. That’s how you get Wired running these flattering pieces on Newt Gingrich in the 1990s.
Oh, it’s so horrifying.
And that’s why everyone thinks the tech industry is full of libertarians. But there’s also a sizable constituency of workers in tech with very different politics—people who identify as leftists or socialists. After all, a lot of tech workers supported Bernie during the Democratic primaries. Do you think new political space has opened up in the industry recently? Or was the industry always more politically diverse than its reputation?
That wing has always been there. One of the things I’ve been trying to figure out is whether it’s changed more recently. The answer to the question can be found, more or less, in something called the Silicon Valley Index, which is a wonderful demographic study of the Valley. It’s been done for about fifteen years, and what it suggests is that the politics of the Valley have held constant—which surprises me. It has been a liberal, left-leaning, Democratic region as a whole pretty steadily for fifteen years.
But the people who get most of the attention in the Valley are the big CEOs. I think that the vision of the Valley as a libertarian space is a combination of actual libertarian beliefs held by people like Peter Thiel and a celebration of libertarian ideals by an East Coast press that wants to elevate inventor types. Steve Jobs is the most famous. East Coast journalists want to rejuvenate the American hero myth—and they’re going to find a world to do it in.
In order to make these heroes, however, they have to cut them off from the context that produced them. They can’t tell a context story. They can’t tell a structure story. They have to tell a hero story. Suddenly the heroes themselves look like solo actors who pushed away the world to become the libertarian ideal of an Ayn Rand novel. So I think it’s a collaboration between actually existing tech leaders and the press around a myth.
That really resonates with how the press covers someone like Elon Musk.
Exactly: Elon Musk is the classic example. And I actually really admire Elon Musk. I should say that one of my principles for working on Silicon Valley has been to take people at their word. The first news story I ever did when I was a journalist was about a guy who bilked widows out of their houses. My job was to figure out how he did it. So I spent all afternoon with him. He was a totally charming man. He didn’t lie to me. He told me exactly how he did it. I reported the story and I got two kinds of letters. One kind of letter said, “You finally busted the prick. You nailed him.” The other kind of letter was written by his friends. I was sure they were going to hate me. But they said, “You finally showed the world what a great businessman he is.” As we try to figure out Silicon Valley, I think it’s important to pull back a bit and try to see it from both sides. That can be tough if you have stakes in the debate. But it also gives you more room to see the whole world.
I also wonder whether one of the reasons that tech CEOs dominate the media narrative is that the ubiquity of nondisclosure agreements (NDAs) makes it very hard for rank-and-file tech workers to have a public voice.
One of the ironies of the Valley is that the NDAs do prevent the transmission of stories from the Valley to Washington, New York, Boston, and elsewhere. But within the Valley, everybody knows everybody, more or less, so the NDA doesn’t apply.