id (int64) | year (int64) | title (string) | url (string) | text (string)
---|---|---|---|---
100 | 2,020 | "Myx Plus In-Home Fitness System Review: Great Workouts, Great Bargain | WIRED" | "https://www.wired.com/review/myx-plus-in-home-fitness-system" | "Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Billy Brown Gear Review: Myx Plus In-Home Fitness System Facebook X Email Save Story Photograph: Myx Fitness Facebook X Email Save Story $1,499 at Myx If you buy something using links in our stories, we may earn a commission. This helps support our journalism.
Learn more.
Please also consider subscribing to WIRED Rating: 8/10 Open rating explainer More and more people are looking for ways to work out at home these days. But you can only do so many push-ups and air squats, and running outside becomes a lot less appealing when the weather starts going downhill.
The Connecticut company Myx Fitness is looking to create the feel of a real gym inside people’s homes with its Myx Plus home studio. The fitness system is made up of a professional-grade Star Trac stationary bike with an integrated 21.5-inch touchscreen tablet, plus some extras like a seven-piece weight set, a foam roller, a resistance band, and a Polar heart rate monitor that tracks your level of exertion.
If you pony up a $29 monthly subscription fee, Myx will also stream video workouts to that giant, swiveling tablet. The on-demand workouts guide you through prerecorded training sessions led by a roster of coaches. The service customizes your workouts to your fitness level and tracks your progress over time.
The bike arrives fully set up. When I got my test model, a technician dropped it off, moved it into my spare room, and walked me through the initial setup. As soon as he left, I set up my profile and, as instructed, went through my first workout: the Myx Assessment ride, which takes you through a workout and measures your heart rate as you reach different levels of exertion. The system then calculates your ideal heart rate range for your future workouts.
Photograph: Regina Nicolardi/Myx Fitness Twenty minutes and 282 calories later, I was dripping sweat and tired, but not quite exhausted. After the workout, I was given my Myx score. Myx uses this to figure out your three heart rate zones (blue for easy work, green for moderate work, and red for high exertion) to make sure that you’re not over- or under-exerting yourself in any given workout. The score is adjusted after each evaluation, and Myx recommends you do a new assessment ride every six weeks.
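If you're curious about the arithmetic behind zone training, here is a minimal Python sketch of how a three-zone scheme like this can bucket a heart rate reading. The 65 and 85 percent cutoffs and the 185-bpm max are hypothetical placeholders for illustration; Myx derives each rider's actual ranges from the assessment ride, not from a generic formula like this.

```python
# Hypothetical three-zone split; Myx's real ranges come from its assessment ride.
def heart_rate_zones(max_hr: int) -> dict[str, range]:
    """Bucket a max heart rate into blue/green/red zones (illustrative cutoffs)."""
    return {
        "blue": range(0, int(max_hr * 0.65)),                    # easy work
        "green": range(int(max_hr * 0.65), int(max_hr * 0.85)),  # moderate work
        "red": range(int(max_hr * 0.85), max_hr + 1),            # high exertion
    }

def classify(bpm: int, zones: dict[str, range]) -> str:
    for name, zone in zones.items():
        if bpm in zone:
            return name
    return "red"  # anything above the stated max still counts as high exertion

zones = heart_rate_zones(185)  # 185 bpm is an example max, not a Myx value
print(classify(142, zones))    # -> "green"
```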
The bike is only part of the Myx Plus package. The weights that come with it are standard rubber-coated dumbbells and a kettlebell, all well built with a high-end fitness studio look that makes it possible to keep them in your house without giving it that prison-yard weight bench look. The heaviest set of weights consists of 9-, 12-, and 15-pound dumbbells and a 25-pound kettlebell, which is a bit light if you’re used to running the rack at Planet Fitness, but that kettlebell had me breathing hard during some of the floor workouts. It’s a great starter set, and you can buy your own heavier dumbbells later when you need to bump up the intensity.
In the first week of workouts, I had to restart the tablet several times to reboot it after the screen froze while it was searching for my Wi-Fi, but that hasn't been an issue since the first few days. Otherwise, the tablet is excellent. At 21.5 inches, it's not so big that you're overwhelmed while watching it up close when riding the bike, but it's also big enough to use for floor workouts several feet away. The touchscreen swivels 360 degrees for off-the-bike sessions.
But obviously, the star of the show is that stationary bike. It's a heavy piece of machinery, and it has a solid feel that is a welcome feature when you get on it and start pedaling. There's no shaking, no wobbling, just a grounded, secure feeling you'd get in that bike at the gym that everyone rushes to get to first—which makes sense, as you'll often see rows of Star Trac bikes in commercial gyms. The bike is also nearly silent, which is great for people who live with light sleepers or share walls with fussy neighbors.
Using touchscreen controls on the tablet, I connected the Myx system directly to my Bluetooth earbuds and the included Polar heart rate monitor. For me, each connection was quick and reliable. Bluetooth devices can be finicky (I’ve often stood at a trailhead, waiting impatiently for my earbuds to connect), so it’s great to be ready to ride within seconds.
When I went into the database to check out Myx’s workouts, my first impression was, “Wow, this is way more than just a bike.” The hundreds of preprogrammed workouts available span four categories: Bike (which contains the traditional spin/cycle type workouts), Floor (workouts using equipment and/or bodyweight to build strength and endurance), Recovery (yoga, meditation, and recovery movements), and Cross-Train (workouts that contain mixes of all three).
Photograph: Myx Fitness In each category are dozens of workouts at three different difficulty levels, led by coaches who hit just the right tone—positive without being saccharine. The pretaped training sessions range from five minutes to 60 minutes, and they have the feel of a one-on-one experience. That's a huge perk—Myx focuses on positive reinforcement to motivate you, so you won't find merit-based leaderboards like those in Peloton workouts or indoor cycling apps like Zwift. There's less of a competitive feel; it's more like a really fit friend is encouraging you through a workout.
As a certified trainer (CrossFit L1, L2, USPA Powerlifting, Precision Nutrition L1) with experience programming workouts and running a gym, I'm immediately skeptical of any programming that I haven't done myself. But Myx has done its homework: Every workout I've done so far has hit that perfect zone of difficulty, taking you right up to the “I can't do this” line and easing it back just a bit so you can keep going. I've left every workout tired and sweaty, but invigorated, rather than ready for a breakfast burrito and a nap, which is my usual state after a rough gym session. I was pleasantly surprised to find higher-level CrossFit movements presented in a way that beginners could understand. I even saw some movements that I haven't come across in my programming experience.
One of my favorite features is Myx Media, which forgoes programming and lets you ride the bike at your own pace while simulating an outdoor ride in scenic locations. (I was excited to see familiar trails and locations on the San Francisco ride.) You can also choose to watch Myx original content like motivational video diaries from the coaches, or, my favorite, watch the Newsy app while you ride. During my testing, it became a habit to wake up and start my day with a 20-minute ride while catching up on current events, then close out the day with a programmed workout after dinner.
The scenic rides come in a lower resolution than the rest of the workouts—a shame, because the views are HD-worthy—while more mundane features like the Newsy app come in crystal clear. Myx is working to add other streaming apps in the near future as well; hopefully we'll see Netflix and Hulu in the … Myx.
At $1,499, the Myx Plus costs less than the bikes from Peloton ($1,895), Echelon ($1,559), and SoulCycle ($2,500). Additionally, the package at that price includes the weights and other extras, which makes the workouts a more well-rounded (and less mind-numbing) experience than simply sitting on a bike and grinding away.
Photograph: Myx Fitness You actually have two options when it comes to purchasing Myx gear. There’s a lower-cost Myx package for $1,299, which includes the bike, the tablet, and the Polar heart rate monitor. You can still engage in a wide range of cycling, bodyweight workouts, and yoga or meditation classes with this option, but unless you already have a set of weights at home (or don’t want to get creative with some gallon jugs), you won’t be able to do the workouts that involve the dumbbells and kettlebells.
The Myx Plus option is a better deal at $1,499. The extra 200 bones gets you those three pairs of dumbbells, the kettlebell, mats for the bike and for floor workouts, a resistance band, and a solid foam roller, all of which would cost over $200 if purchased separately.
Access to the personalized workouts requires that aforementioned monthly $29 fee—about the same as, if not cheaper than, competing connected fitness programs. Up to five people can build fitness profiles in the app, so one subscription can be shared by a whole family. You get full access to classes, with the ability to choose between the four types of workouts, the duration of each, and which of the 17 available coaches you prefer (all of whom are amiable, but I've singled out some favorites). The app also has a calendar feature that lets you schedule your workouts, and it will send you reminders when it's time to saddle up.
The membership costs roughly the same as most gym fees, and the combined monthly payments for the gear (if you finance it) and the membership are less than a single session of personal training, which runs between $40 and $70 per hour on average.
The Myx Plus feels like the next level of home gym setups. Instead of banging out five sets of 10 reps with some rusty weights on a cracked leather bench in your garage, you can get the feel of one-on-one personal training with top-tier equipment in your spare bedroom, basement, garage, or wherever you have around 50 square feet to spare. Since I’ve been stuck in the house during the stay-at-home orders, it’s been a great way to stay in shape and get some endorphins going to keep the 2020 blues to a minimum.
" |
101 | 2,023 | "Best Bike Accessories (2023): Helmets, Locks, Pumps, Rain Gear, and More | WIRED" | "https://www.wired.com/gallery/best-biking-cycling-accessories" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Matt Jancer Gear The Best Bike Accessories Save this story Save Save this story Save Bikes are fantastical machines. Ideal companions, they never complain and they never ask you, “Are we there yet?” An all-day pleasure cruise or a grueling workday commute are no big deal. They return to us far more faithful service than the occasional care we pour into them. That said, most bikes arrive from the factory ready for a casual Sunday joyride but not much else. If you want to put your bike to work hauling cargo or commuting to the office, you'll need some bike accessories to make those journeys comfortable and fun.
Lucky for you, most bicycles are highly and easily customizable, and there’s a mountain’s worth of gear to choose from. Practically all of these accessories will work for non-electric bikes and most electric bikes, too. Take a look at our guides to Ebike Classes and Best Electric Bikes for more.
Updated August 2023: We've added the Lezyne Matrix Saddle Tagger, ODI Rogue Grips, PDW Alexander Graham, and Miles Wide Fork Cork. We've also swapped out a few products for similar models and updated pricing and availability.
For More Comfort: So many things these days are a pain in the back. Riding your bike doesn't have to be one of them. Swapping out handlebar grips, seats, and even seat posts are some of the easiest modifications you can make that'll significantly improve your ride.
Photograph: Ergon $30 at Amazon $35 at REI Poor wrist posture can lead to carpal tunnel syndrome or cyclist’s palsy, where you're putting pressure on your median and ulnar nerves, respectively. The ergonomic Ergon GA3 are my favorite bike grips because they have small wings that correct your wrist posture to prevent these conditions. Even after long rides, I find my wrists don't have the soreness that I used to suffer from.
Photograph: Brooks $111 at Amazon $150 at REI I haven't found any cheap or heavily padded gel aftermarket saddles to be much of an improvement over the seats that come with bikes. The Brooks B17, an old-school legend, is ultra-comfortable despite its stiff leather construction—or perhaps because of it. I've spent hours in its saddle without developing the sore spots that accompany riding in soft gel seats. Like a good chair, firm support is more important than pure plushy softness. These saddles are also rugged; they usually last for a decade or more. If you don't do leather, Brooks makes a nylon option for $130.
Photograph: Amazon $250 at Amazon $250 at Cirrus For some extra shock absorption over rough roads, you can add a suspension seatpost like the Cirrus Kinekt to a fixed, hardtail bike. Some reviewers found its dual coil spring suspension bouncy, but I didn't have that issue at all. Make sure when you're buying the Kinekt that you're buying the right springs for the rider's body weight.
ODI Rogue Grips for $33 : Rogues are such a mountain bike classic that it seems half the companies out there are making clones of them. With a big, knobby rubber texture, they have grip in spades. Personally, I'm not a huge fan of using them barehanded, but if you like mixing a little off-road into your trips or ride with gloves (or just aren't as bothered by them barehanded as me), then they should do the trick.
REI Co-op Link Padded Liner Shorts for $35 (Women's Sizing, Men's Sizing) : These add an extra soft layer between rider and machine and wick sweat away to keep you from feeling clammy. The chamois padding helps provide comfort on longer rides, too, or on bike saddles that lack sufficient cushioning, as many bikes' standard seats typically do.
Few bikes come with the attachments needed to carry cargo on errands and grocery runs. Whether you wear a backpack or a pannier bag—a style of bag that attaches to a luggage rack that you install over one of your wheels—make sure that you can get real work done by turning your bike into a cargo hauler.
Photograph: Amazon $35 at Amazon If your bike doesn't already have a pannier rack, you'll need to install one if you want to use pannier bags. The Topeak Explorer fits most bikes (with and without disc brakes) and carries up to 55 pounds. It only weighs 1.5 pounds, too, so it won't noticeably weigh down your bike. The wide gaps between the deck and outer bars make attaching and detaching pannier bags a breeze.
Photograph: Herschel $65 at Amazon $80 at Herschel The Heritage was named the best budget bag in our guide to the Best Laptop Backpacks for its padded laptop sleeve that can fit laptops of up to 15 inches and for its tough, 600-denier polyester fabric. After using hers for years, my colleague says it's barely showing any signs of wear.
Photograph: Portland Design Works $102 at Amazon $99 at Portland Design Works If you need to lug around more than the Topeak's 55-pound limit allows, check out this beautiful steel-and-bamboo cargo rack that holds up to 77 pounds. It weighs 3 pounds, but the construction is rock solid. You can mount any standard tailbag or pannier bag to it, as well. The Loading Dock for $115 weighs a pound less and holds up to 35 pounds, thanks to its aluminum construction, if you'd rather save some weight but keep the gorgeous looks.
Handlestash for $38 : Ever try to carry a cup of coffee (or any other drink) home from the café on your bike? We can't recommend going far while holding it in one hand and riding with the other. Unlike a regular cargo basket, the Handlestash's loose fabric and integrated springs absorb enough vibrations to hold a cup of coffee or can of soda without splashing it all over the road. The Gear Team's commerce director Martin Cizmar hasn't taken it off his bike in over a year.
Miles Wide Fork Cork for $29 : The head tube—that vertical pipe linking your handlebars to your front axle—is free storage space. The opening near the tire is almost always open. The Fork Cork is designed to plug up the end of it to keep the inside of the tube free from mud and road debris and, more crucially, give you a discreet, watertight place to store spare parts, tools, candy bars, and whatever else. You need a tapered steering tube for it to work, and it's designed for mountain bikes with enough clearance between the tire and head tube to get your hand between them.
Rad Power Rad Trailer for $299 : Need to carry a seriously bulky or heavy load? It's best to pull it. This steel bike trailer (with a polymer deck) weighs 25 pounds and can hold up to 100 pounds. Senior associate reviews editor Adrienne So paired it with the soft-sided Rad Trailer Pet Insert for $229; together, they can transport any pet weighing up to 84 pounds.
Banjo Brothers Grocery Bag for $60 : It holds up to 1,100 cubic inches of storage in a rectangular form that maximizes carrying space. That's large enough for one stuffed grocery bag, but you can always add a second one to the other side of your pannier rack to double your carrying capacity. When not in use, it folds flat against your bike. If you plan to carry your laptop, put it in a water-repellent, padded Incase Laptop Sleeve for $45 to protect it from drizzles, splashes, and impacts.
Wald Basket for $46 : This popular, basic option mounts easily onto the front of your handlebars. Even though it looks just as good empty as it does full, you can't fold it away when it's not in use. The REI Co-op Beyonder Soft Folding Basket for $40 requires a front rack, but it has carrying straps so you can take it into stores with you, and it folds flat when you're not using it.
If you ride enough, you're going to get caught in a storm from time to time, but you don't have to ride soaked and miserable. With the proper rainwear and protective equipment, you can keep yourself (mostly) dry and make riding in the wet a bearable, if not pleasurable, experience.
Photograph: Portland Design Works $21 at PDW These environmentally friendly fenders are made from 97 percent recycled bottles and are incredibly easy to pop on and off the bike. They don't provide as much coverage from wet road spray as full fenders do, but they're easy to take off when the skies are sunny. As long as your bike has a hole in the fork crown, they'll likely fit. There are two versions: MTB (65 mm wide) for bikes with wide tires and City (48 mm wide) for bikes with narrow tires. Make sure you get the right one.
Photograph: Portland Design Works $129 at PDW $129 at REI They're pricey, but I've found that with bike fenders, you tend to get what you pay for. The PDWs provide fuller coverage than a lot of competitors that don't extend as low to the ground, and their aluminum construction is tougher than plastic fenders, with hardly any extra weight. If your bike doesn't have eyelets for fenders, these come with extra hardware you can use to mount them.
Photograph: REI $100 at REI (Men's) $100 at REI (Women's) This hardshell rain jacket will keep your upper body dry during a rainstorm. The women's sizing sells for the same price. Take a look at our guide to the Best Rain Jackets for more picks.
Photograph: Planet Bike $11 at Amazon Cover up your saddle if you know it's going to rain, and you won't have to ride home with wet pants. The Planet Bike cover's elastic drawstring cinches down tight over the seat so that it doesn't threaten to blow off, like some seat covers. Some people use disposable grocery bags, but they tend to need constant replacing, and they can blow away and become litter.
Few American cities are designed with bicycle infrastructure at the top of city planners' minds. And even when you do find yourself in a blessedly welcome bike lane, you have to contend with other cyclists, scooter pilots, and pedestrians. Make sure you're visible with a light and keep that noggin protected with a helmet.
Photograph: REI $80 at Amazon $80 at REI Swaddle your melon with all the protection you can get. Seriously, a trip in the back of an ambulance is much less comfortable than today's well-vented and nicely padded helmets. And stylish, when you're talking about Nutcase. This helmet comes with MIPS (Multi-directional Impact Protection System), which allows the inner liner of the helmet to rotate within the outer shell, reducing the likelihood of rotational brain injuries in the event of a crash.
Photograph: Nutcase $150 at REI The Vio (8/10, WIRED Recommends) has LED lights built in 360 degrees around the helmet to improve visibility on the road, so you don't need to put separate headlights and tail lights on your bike. It comes with MIPS, too, which means it can rotate slightly to dissipate the rotational impact force of a crash, and its front light's 200 lumens are good enough to see down city streets, if not completely deserted country roads. It only runs for three hours before you need to recharge it via a mini-USB cable, though. If you're planning on hopping onto an ebike, the Bern Hudson ($140) is rated for up to 27 mph, which is just about as fast as a class-3 ebike can legally go at full speed.
Kryptonite Incite X3 and XR Set for $80: The 30-lux headlamp and 0.06-lux tail lamp aren't the brightest on the market, but they're enough to make sure motorists, cyclists, and pedestrians notice you. They're USB-rechargeable and last for up to 24 and 36 hours on a charge, respectively. For a more compact, convenient alternative, So has been a big fan of the Thousand Magnetic Head Light for $35.
It's USB-C-rechargeable and clamps quickly onto any standard handlebar. Just pop the little 2-ounce light off and into your pocket when you head indoors.
Bookman Fabric Reflective Stickers for $10 : Lights are important for being seen, but an easy way to pick up extra visibility on the cheap is to add reflective stickers to catch cars' headlights. You can stick them on your bicycle, or you can do like senior associate reviews editor Adrienne So and stick some on your cycling clothes.
Portland Design Works King of Ding II Bike Bell for $25 : If you want a polite way to tell people to get out of your way or just give nearby bikers and pedestrians a heads-up, get a bell. (Many states require them.) There's something charming about its classic “ding!” that makes it pleasant—although attention-grabbing—to hear. The Alexander Graham for $28 is the same basic bell, but it allows you to save valuable handlebar real estate by replacing a spacer on the steering tube. That makes it a bit more thief-proof too, especially if you have a locking top cap. Both bells are almost painfully loud and produce a stunningly clear, long-ringing “ding” with each flick of the striker.
Park Tool Rescue Tool for $34 : You could just pick out the necessary hex keys that fit your bike bolts and carry them in your pocket or pannier bag, or you could get a pocketable, bike-specific set like the Rescue Tool. Its 16 included tools fold into a compact package that you can slip into a bag or pocket when you head out for a ride.
Make sure your bike stays your bike with the right locks, GPS trackers, and security bolts. Check out my guide to the Best Bike Locks for more picks and additional tips on how to secure your bike.
Photograph: Amazon $66 at Amazon $67 at Walmart No lock is going to deter the most determined thief with an angle grinder, but at least half the battle of security is making your bike a less attractive target. At 2.9 pounds, the KryptoLok strikes a healthy balance between reasonably light weight and adequate (but not top-level) security. It also comes with Kryptonite's Transit FlexFrame bracket, which lets you mount the lock to your bike's frame for easy transportation around town.
Photograph: Amazon $60 at Amazon For the highest level of security on a bike lock that you can take with you on rides, upgrade to the Granit X-Plus 540. Both ends of the U-bar lock into the cylinder, so in order to grind through this lock a thief would have to do it twice—once on either side of the 13-millimeter-thick bars. Thieves don't like to spend a long time thieving, as it means more chance of being caught, so this is top-notch security.
Lezyne Matrix Saddle Tagger for $17 : Always forgetting where you parked your bike? Worried somebody will walk off with it? Stick an AirTag or Tile tracker under your bike's saddle with this inconspicuous, waterproof tracker mount. Unlike a tracker dropped inside a bike frame or a metal mount, the Lezyne's plastic construction gave me no issues with the AirTag's range or accuracy. It comes with a security Torx bolt and took less than a minute to install.
Abus Steel-O-Chain 9809 for $85 : If you lock up at awkward spots, such as fences and railings, you might need something longer than a U-lock. Even though it's heavy at 5.5 pounds, it's plenty flexible and long enough to tie up anywhere, even around thick-frame ebikes like the Super 73 that won't work with U-locks.
Invoxia GPS Tracker for $129 : Rather than try to shove a Tile tracker inside the frame, you can purchase a stand-alone bike tracker that relies on GPS rather than Bluetooth for much wider coverage. The Invoxia syncs up with a smartphone app to show your bike's location and alert you if it moves, and it lasts from 15 to 49 days on a single charge. The price includes one year of wireless data coverage (additional years are $30 each).
Security bolts : Bikes use common bolts to make maintenance and assembly easy. It's an unfortunate side effect that using standard bolts makes it easier for thieves to steal the valuable parts off your bike when it's parked. Replace the bolts on the most vulnerable parts (seat post, saddle, handlebars, and wheels) with Pitlock or Hexlox security bolts. The bolts are individually keyed and can be unfastened by a personalized tool that only you own. Thieves hate these things. Secure your front wheel first, then move on from there.
Keeping your bike on the road is usually just a matter of keeping the tires properly inflated and the chain well lubricated. But slack on maintenance, and you could eventually be looking at a repair bill. Fortunately, maintaining a bike is very easy. (Your local shop or REI also offers yearly tune-ups at a reasonable price.) Photograph: Lezyne $70 at Amazon $70 at Backcountry Metal pumps are worth the expense over plastic pumps, which don't tend to last very long. The Lezyne's parts are steel where it counts. It works with the three common valve types (Presta, Schrader, and Dunlop) and inflates tires up to 220 psi, which is more than enough for most road tires. It's effortlessly quick to change valve adapters, the psi gauge is clear to read without stooping over, and it doesn't take too many pumps to fill a tire.
Photograph: REI $14 at REI $14 at Backcountry You've got to keep the chain clean to keep it functioning correctly. With three adjustable-width brush heads, scrubbing my bike chain took far less time with the Grunge Brush than with a typical straight brush, and I got into the chain's nooks and crannies better, especially on the side that faces the bike frame. I use a Grunge Brush on my motorcycles, too, but I switched over to this version, which has a narrower brush for bicycles' narrower chains, because it takes less finagling to get it to clean three sides at once.
Park Tool Bicycle Chain Cleaning Kit for $40 : You're going to need to periodically clean the bike chain to keep it from gunking up and malfunctioning. This kit includes a brush and a bottle of cleaning solution, in addition to the cleaning tool itself.
Feedback Sports Sport Mechanic Repair Stand for $180 : Sooner or later, you're going to need to work on your bike with the wheels off the ground. Before you think you'll just get cute and save a buck by balancing the bike on a box, take it from me: it'll fall, and either you or the bike will end up hurt. It's happened to me. This stand is steady, with three feet versus two, and has a sturdy clamping mechanism with adjustable height.
Finish Line Dry Lube for $10 : The chain needs to stay lubricated to work properly. Every so often (and after cleanings), apply a little of this dry lubricant, which is slightly less messy than old-fashioned wet lube. Wash your hands after using this stuff; it has Teflon in it.
Finish Line Brake Cleaner for $9 : If your bike has disc brakes instead of pull-brakes or calipers, you're going to need to clean the brake hardware periodically to keep your stopping performance from degrading. This stuff removes stubborn brake dust, which is difficult to remove without purpose-made solvents.
Sram Disc Brake Bleed Kit for $55 : For years, hydraulic braking systems (operated by fluid) were found only on high-end bikes. In recent years, though, they've begun to trickle down to mid-priced bikes, especially on bikes with electric motors. They provide smoother stopping but require the owner to replace the fluid regularly. You need a special bleeder tool, such as this one, to replace fluid. This kit comes with a bottle of brake fluid, but if you need more, a 4 fluid-ounce bottle of Sram 5.1 Dot Fluid for $14 should do the trick.
Feedback Sports Velo Hinge Bike Rack for $32 : Whether you stash your bike in a cramped apartment at night or in a garage, you could always stand to free up a little more room. It was easy for me to mount the Velo using three anchor bolts in a wall. It holds up to 50 pounds, and once mounted, the bike can be swung nearly flat against the wall and out of the way.
" |
102 | 2,018 | "WPA3 Wi-Fi Security Will Save You From Yourself | WIRED" | "https://www.wired.com/story/wpa3-wi-fi-security-passwords-easy-connect" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Brian Barrett Security The Next Generation of Wi-Fi Security Will Save You From Yourself Alyssa Foote Save this story Save Save this story Save There are more Wi-Fi devices in active use around the world—roughly 9 billion—than there are human beings. That ubiquity makes protecting Wi-Fi from hackers one of the most important tasks in cybersecurity. Which is why the arrival of next-generation wireless security protocol WPA3 deserves your attention: Not only is it going to keep Wi-Fi connections safer, but also it will help save you from your own security shortcomings.
The Wi-Fi Alliance, a trade group that oversees WPA3, is releasing full details today, after announcing the broad outlines in January. Still, it'll be some time before you can fully enjoy its benefits; the Wi-Fi Alliance doesn't expect broad implementation until late 2019 at the earliest. In the course that WPA3 charts for Wi-Fi, though, security experts see critical, long-overdue improvements to a technology you use more than almost any other.
“If you ask virtually any security person, they’ll say don’t use Wi-Fi, or if you do, immediately throw a VPN connection on top of it,” says Bob Rudis, chief data officer at security firm Rapid 7. “Now, Wi-Fi becomes something where we can say hey, if the place you’re going to uses WPA3 and your device uses WPA3, you can pretty much use Wi-Fi in that location.” Start with how WPA3 will protect you at home. Specifically, it’ll mitigate the damage that might stem from your lazy passwords.
A fundamental weakness of WPA2, the current wireless security protocol that dates back to 2004, is that it lets hackers deploy a so-called offline dictionary attack to guess your password. An attacker can take as many shots as they want at guessing your credentials without being on the same network, cycling through the entire dictionary—and beyond—in relatively short order.
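To make the "offline" part concrete: WPA2-Personal derives its Pairwise Master Key from the passphrase and network name with PBKDF2, so one captured handshake gives an attacker everything needed to test guesses at their leisure. The sketch below compares derived keys directly to keep things short; a real cracker would check each guess against the MIC in the captured four-way handshake. The passphrase, SSID, and wordlist are invented for illustration.

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-Personal: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Stand-in for key material recovered from a sniffed handshake.
captured_pmk = wpa2_pmk("password123", "HomeNetwork")

for guess in ["letmein", "dragon", "password123", "qwerty"]:
    if wpa2_pmk(guess, "HomeNetwork") == captured_pmk:
        print(f"cracked: {guess}")  # no further contact with the network needed
        break
```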
'They’re not trying to hide the details of the system.' Joshua Wright, Counter Hack “Let’s say that I’m trying to communicate with somebody, and you want to be able to eavesdrop on what we’re saying. In an offline attack, you can either passively stand there and capture an exchange, or maybe interact with me once. And then you can leave, you can go somewhere else, you can spin up a bunch of cloud computing services and you can try a brute-force dictionary attack without ever interacting with me again, until you figure out my password,” says Kevin Robinson, a Wi-Fi Alliance executive.
This kind of attack does have limitations. “If you pick a password that’s 16 characters or 30 characters in length, there’s just no way, we’re just not going to crack it,” says Joshua Wright, a senior technical analyst with information security company Counter Hack. Chances are, though, you didn’t pick that kind of password. “The problem is really consumers who don’t know better, where their home password is their first initial and the name of their favorite car.” If that sounds familiar, please change your password immediately. In the meantime, WPA3 will protect against dictionary attacks by implementing a new key exchange protocol. WPA2 used an imperfect four-way handshake between clients and access points to enable encrypted connections; it’s what was behind the notorious KRACK vulnerability that impacted basically every connected device. WPA3 will ditch that in favor of the more secure—and widely vetted—Simultaneous Authentication of Equals handshake.
There are plenty of technical differences, but the upshot for you is twofold. First, those dictionary attacks? They’re essentially done. “In this new scenario, every single time that you want to take a guess at the password, to try to get into the conversation, you have to interact with me,” says Robinson. “You get one guess each time.” Which means that even if you use your pet’s name as your Wi-Fi password, hackers will be much less likely to take the time to crack it.
The other benefit comes in the event that your password gets compromised nonetheless. With this new handshake, WPA3 supports forward secrecy, meaning that any traffic that came across your transom before an outsider gained access will remain encrypted. With WPA2, they can decrypt old traffic as well.
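A rough way to see why the new handshake buys forward secrecy: every session mixes in fresh ephemeral secrets, so session keys are independent of one another and of the long-term password. The Python sketch below uses X25519 from the third-party cryptography package to show the ephemeral key-agreement idea; SAE's actual "dragonfly" exchange is a different, more involved construction.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def session_key() -> bytes:
    client_eph = X25519PrivateKey.generate()  # fresh secret, discarded after the session
    ap_eph = X25519PrivateKey.generate()
    # Each side combines its own ephemeral secret with the peer's public key.
    shared_client = client_eph.exchange(ap_eph.public_key())
    shared_ap = ap_eph.exchange(client_eph.public_key())
    assert shared_client == shared_ap
    return shared_client

# Two sessions yield unrelated keys, so recording traffic today and learning
# the password tomorrow doesn't decrypt yesterday's session.
assert session_key() != session_key()
```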
When WPA2 came along in 2004, the Internet of Things had not yet become anything close to the all-consuming security horror that is its present-day hallmark. No wonder, then, that WPA2 offered no streamlined way to safely onboard these devices to an existing Wi-Fi network. And in fact, the predominant method by which that process happens today—Wi-Fi Protected Setup—has had known vulnerabilities since 2011.
WPA3 provides a fix.
Wi-Fi Easy Connect, as the Wi-Fi Alliance calls it, makes it easier to get wireless devices that have no (or limited) screen or input mechanism onto your network. When enabled, you’ll simply use your smartphone to scan a QR code on your router, then scan a QR code on your printer or speaker or other IoT device, and you're set—they're securely connected. With the QR code method, you’re using public key-based encryption to onboard devices that currently largely lack a simple, secure method to do so.
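Conceptually, the QR code is just a carrier for the device's bootstrapping public key. The toy below generates a key pair and packs the public half into a DPP-style URI that could be rendered as a QR code. The real Easy Connect (Device Provisioning Protocol) bootstrapping format and the provisioning exchange that follows differ in detail, so treat the field layout here as illustrative only.

```python
import base64
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

device_key = X25519PrivateKey.generate()  # in practice, fixed at manufacture
pub_raw = device_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)

# Illustrative DPP-style URI; the spec's actual fields and key encoding differ.
qr_payload = "DPP:K:" + base64.b64encode(pub_raw).decode() + ";;"
print(qr_payload)  # this string is what the sticker's QR code would encode
```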
“Right now it’s really hard to deploy IoT things fairly securely. The reality is they have no screen, they have no display,” says Rudis. Wi-Fi Easy Connect obviates that issue. “With WPA3, it's automatically connecting to a secure, closed network. And it’s going to have the ability to lock in those credentials so that it’s a lot easier to get a lot more IoT devices rolled out in a secure manner.” Here again, Wi-Fi Easy Connect’s neatest trick is in its ease of use. It’s not just safe; it’s impossible to screw up.
'Right now it’s really hard to deploy IoT things fairly securely.' Bob Rudis, Rapid 7 That trend plays out also with Wi-Fi Enhanced Open, which the Wi-Fi Alliance detailed a few weeks before. You've probably heard that you should avoid doing any sensitive browsing or data entry on public Wi-Fi networks. That's because with WPA2, anyone on the same public network as you can observe your activity, and target you with intrusions like man-in-the-middle attacks or traffic sniffing. On WPA3? Not so much. When you log onto a coffee shop’s WPA3 Wi-Fi with a WPA3 device, your connection will automatically be encrypted without the need for additional credentials. It does so using an established standard called Opportunistic Wireless Encryption.
“By default, WPA3 is going to be fully encrypted from the minute that you begin to do anything with regards to getting on the wireless network,” according to Rudis. “That’s fundamentally huge.” As with the password protections, WPA3's expanded encryption for public networks also keeps Wi-Fi users safe from a vulnerability they may not realize exists in the first place. In fact, if anything it might make Wi-Fi users feel too secure.
“The heart is in the right place, but it doesn’t stop the attack,” says Wright. “It’s a partial solution. My concern is that consumers think they have this automatic encryption mechanism because of WPA3, but it’s not guaranteed. An attacker can impersonate the access point, and then turn that feature off.” Even with the added technical details, talking about WPA3 still feels almost premature. While major manufacturers like Qualcomm have already committed to implementing it as early as this summer, to take full advantage of WPA3’s many upgrades the entire ecosystem needs to embrace it.
That’ll happen in time, just as it did with WPA2. And the Wi-Fi Alliance’s Robinson says that backward interoperability with WPA2 will ensure that some added security benefits will be available as soon as the devices themselves are. “Even at the very beginning, when a user has a mix of device capabilities, if they get a network with WPA3 in it, they can immediately turn on a transitional mode. Any of their WPA3-capable devices will get the benefits of WPA3, and the legacy WPA2 devices can continue to connect,” Robinson says.
Lurking inside that assurance, though, is the reality that WPA3 will come at a literal cost. “The gotcha is that everyone’s got to buy a new everything,” says Rudis. “But at least it’s setting the framework for a much more secure setup than what we’ve got now.” Just as importantly, that framework mostly relies on solutions that security researchers already have had a chance to poke and prod for holes. That hasn't always been the case.
“Five years ago the Wi-Fi Alliance was creating its own protocols in secrecy, not disclosing the details, and then it turns out some of them have problems,” says Wright. “Now, they’re more adopting known and tested and vetted protocols that we have a lot more confidence in, and they’re not trying to hide the details of the system.” Which makes sense. When you’re securing one of the most widely used technologies on Earth, you don’t want to leave anything to chance.
" |
103 | 2,021 | "The Top Windows 11 Features (and How to Install Them) | WIRED" | "https://www.wired.com/story/most-important-things-microsoft-announcement-windows-11-android-apps" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Eric Ravenscraft Gear The Most Important New Features in Windows 11—and How to Get Them Windows 11 will be free to everyone who owns Windows 10, this fall.
Photograph: Microsoft Microsoft is officially rolling out Windows 11, the next major version of its operating system (after announcing it back in June). It adds a revamped Start Menu, better multi-monitor and touchscreen support, tighter integration with Xbox Game Pass, and a renewed push for the Windows Store.
We'll go over all the new features, but first, here's how to download Windows 11 to your device.
Microsoft is making some substantial (and at times controversial) changes to compatibility requirements for Windows 11. Upgrading is free if you're running Windows 10, but only CPUs from Intel's 8th generation and AMD's Ryzen 2000 line and newer are officially eligible (with a few exceptions). If your hardware is eligible, the update will automatically roll out by mid-2022.
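If you want a quick sanity check before running Microsoft's tool, those cutoffs are easy to approximate from a CPU's marketing name. The Python heuristic below is a sketch only: the official compatibility list has exceptions in both directions, and the sample CPU strings are examples rather than output from any particular API, so let PC Health Check have the final word.

```python
import re

def looks_eligible(cpu_name: str) -> bool:
    """Rough check against the official cutoffs: Intel 8th gen, AMD Ryzen 2000."""
    intel = re.search(r"i[3579]-(\d{4,5})", cpu_name)  # "i7-8550U" -> "8550"
    if intel:
        return int(intel.group(1)) >= 8000             # 8th gen (8xxx) and newer
    ryzen = re.search(r"Ryzen \d (\d{4})", cpu_name)   # "Ryzen 5 2600" -> "2600"
    if ryzen:
        return int(ryzen.group(1)) >= 2000             # Ryzen 2000 and newer
    return False                                       # unknown chip: assume ineligible

print(looks_eligible("Intel(R) Core(TM) i7-8550U"))    # True  (8th gen)
print(looks_eligible("Intel(R) Core(TM) i5-7600K"))    # False (7th gen)
print(looks_eligible("AMD Ryzen 5 2600 Six-Core"))     # True  (Ryzen 2000)
```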
Unofficially, however, many more machines with older CPUs will work with the new OS version, but you'll have to install Windows 11 manually by creating an upgrade tool from the Windows 11 software download page and installing it yourself. When you do, you might get a warning that your hardware isn't officially supported, and it might not be entitled to receive future updates.
You can use Microsoft's recently updated PC Health Check app to find out if your computer is officially eligible. This will also help you find out if you have or need to enable the Trusted Platform Module 2.0, which is an official eligibility requirement and might be the only thing holding up some machines from being eligible. Your computer may already have it, but it might be disabled, and you can turn it on in your system's BIOS.
You can also bypass the TPM check with a registry tweak that Microsoft outlines (and advises against for most people).
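For reference, the documented tweak is a single DWORD value under HKLM\SYSTEM\Setup\MoSetup. Expressed with Python's standard-library winreg module, it looks like the sketch below; run it from an elevated prompt, and keep in mind this is exactly the unsupported path Microsoft warns about.

```python
import winreg

# Set the value Microsoft documents for installing Windows 11 on hardware
# that fails the TPM/CPU check: AllowUpgradesWithUnsupportedTPMOrCPU = 1.
key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, r"SYSTEM\Setup\MoSetup")
winreg.SetValueEx(key, "AllowUpgradesWithUnsupportedTPMOrCPU", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
```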
If this all seems complicated, it's because it is. The short version is, if you have a new computer made after around 2017 or so, there's a good chance that you'll receive a Windows 11 update notification eventually. However, if you built your own machine or have an older computer, you might need to research and do some tinkering to install Windows 11.
As stated above, if you have officially eligible hardware, installing Windows 11 is easy. You can wait for the free update to roll out to your machine, just like any other update, and install it when you get a notification that it's ready. Alternatively, you can check for the update manually by heading to Settings > Windows Update and clicking “Check for updates.” If your hardware isn't eligible, however, you'll need to download Windows 11 and create an upgrade tool. You can use this file to create your own bootable USB drive. Make sure you have an empty USB stick with at least 8 GB of space and then follow Microsoft's guide to creating a bootable drive.
As we said above, doing this on hardware that's not officially eligible might result in not qualifying for future updates.
You’d be forgiven if the new features in Windows 11 sound familiar. Microsoft has added widgets, translucent windows, and window snapping.
All of these features have been around for a while, but the approach here looks better. In fact, most of the new features are designed around a theme of incremental improvement rather than wholesale overhaul (which is good, because we all remember Windows 8). With the exception of one minor change that might be quite polarizing … The biggest visual difference in Windows 11 is how the Start button on the taskbar is centered as opposed to being on the far left of the screen. There is an option to move it back to the corner if you’re not willing to retrain your muscle memory, but Microsoft is keen to mimic the MacOS and Chrome OS look.
The new Start Menu has been reworked to remove Live Tiles (only marginally useful in the past). Instead, there are a set of pinned apps and recent documents. A search interface at the top of the menu, much like the Start Menu today, will intelligently search for the documents, apps, or settings you’re trying to find.
For years, Windows has been a bit of a fractured mess, with newer, sleek user interface elements mixing with old. Windows 11 finally updates many features that have looked out of place in the past, and that means you'll see new designs more frequently.
Two of the biggest places you'll notice a change are in File Explorer, and any time you right-click to get a context menu. In the latter case, common actions like cut, copy, paste, and rename have been moved to a smaller, accessible bar next to the mouse with just an icon, while other features like Properties or “Open in new window” are still listed out with an icon and label like you're used to. It's cleaner, but it can take some getting used to.
Microsoft tried to make widgets happen for years before abandoning them, but this might (might) be the version that sticks. A new button in the taskbar will open a widget panel with a to-do list, weather, traffic, calendar, and other basic widgets. This isn’t too different from how widgets work in MacOS, available when you want to take a glance but disappearing when you don’t need them. Eventually, the feature will be open to developers, so expect to see more third-party widgets down the road.
Laptop users who dock their computer to a separate monitor are all too familiar with the hassle that comes from managing all their windows. Once you disconnect the monitor, any windows on that monitor get resized and shuffled around, creating a mess on your desktop. Windows 11 puts an end to that. When you unplug your laptop from a second monitor, any open windows on that screen will minimize but remember their place. When you plug the screen back in, they’ll pop right back to where they were before.
Windows 11 also makes virtual desktops (introduced in Windows 10 in limited form) much more powerful and useful. There's a new desktop menu in the taskbar, but there's also good keyboard support: out of the box, pressing Ctrl + Win plus the left or right arrow key moves through your virtual desktops, much as Alt-Tab moves between applications.
Windows’ current snapping feature is useful if you want to put two windows side by side, but you have to do any other arranging yourself. Windows 11 changes that. Now, when you hover over the Maximize button on a window, you’ll see a small arrangement selector, showing you different layouts you can snap windows to, including three- or four-window layouts. You can then select which windows to fill in the rest of the layout and get to work quicker.
Another in the category of features that Microsoft discontinued only to bring back: Windows 11 once again introduces a translucent window design. Apps and window borders—including the Start Menu and widget menu—will be semi-see-through, like a frosted-glass window. It’s a nice look, and it probably won’t have the performance issues on lower-end hardware that it had the last time Microsoft tried this trick.
While Microsoft’s hardware team makes some great convertible laptops and tablets , the software hasn’t quite kept up. Windows 11 hopes to fix some of the most annoying problems by adding larger touch targets for resizing windows. There’s also a smaller touch-typing keyboard that can sit in the corner of the screen for one-handed typing, not unlike how you might type on your phone.
If you use a stylus, the OS will also support haptic feedback, which might make writing feel more responsive. It remains to be seen if these changes are enough to make Windows a natural touchscreen experience on a tablet, but it can’t be worse than switching entirely into a Tablet Mode like Windows 10 does now.
Like Zoom, Microsoft Teams has seen a massive uptick in usage since March 2020, for obvious reasons. So it makes sense that Microsoft is tying Teams more tightly into its newest operating system. The Chat icon in the taskbar launches a list of your recent contacts where you can pick up a conversation where you left off, or start a new one. When you receive a message, you'll even be able to reply directly from the notification itself.
The downside is that Microsoft Teams is enabled by default in Windows 11, so if you don't use it, you might want to turn it off.
With Microsoft owning two of the biggest gaming platforms in the world—Windows for PC gaming and the Xbox—you’d think that combining the two would be a higher priority. Well, Windows 11 is finally making this a reality by bringing some Xbox features to PC.
First, there's the DirectStorage API, which lets games load data directly into your graphics card's memory, drastically cutting down on load times. The process is a little more complicated than that brief description makes it sound, but if you have the hardware and games that support it, you’ll be spending a lot less time waiting to play.
Another major Xbox feature now available on PCs is called Auto HDR. For games created using DirectX 11 or later, this feature can automatically upgrade games that previously used only SDR to the much richer and vibrant HDR standard.
This won’t magically make games take full advantage of HDR the way a title built around the full range of HDR colors from the beginning would, but it’s a welcome quality-of-life update. Especially for your latest Skyrim play-through.
And speaking of games developed by Bethesda, the final and perhaps biggest Xbox-related change is that the Xbox app comes built in. The app provides access to your library of games purchased through the Xbox Store, including those that are a part of Microsoft’s wildly popular Game Pass subscription.
The Xbox app also enables Game Pass subscribers to stream games from the cloud via the company’s xCloud technology.
Similar to Google’s Stadia , xCloud lets players run games on Microsoft’s servers and stream the audio and video back to their computer. This lets players run demanding games on PCs with minimal specs, right from an app that comes built into Windows.
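A back-of-the-envelope latency budget captures both the appeal and the catch of this approach: every button press has to cross the network, get rendered and encoded on a server, then travel back as video. The stage timings below are illustrative guesses for the sake of the arithmetic, not Microsoft's numbers:

# An illustrative latency budget for cloud streaming of the xCloud sort.
# Each stage figure is a guess for the sake of the arithmetic, not a
# measurement of Microsoft's service.
budget_ms = {
    "input to server (network)": 20,
    "render one frame at 60 fps": 17,
    "video encode": 5,
    "video back to screen (network)": 20,
    "decode and display": 10,
}
total = sum(budget_ms.values())
for stage, ms in budget_ms.items():
    print(f"{stage:32s} {ms:3d} ms")
print(f"{'total, input to photon':32s} {total:3d} ms")  # 72 ms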
Right now, the Windows Store isn’t terribly useful, because it allows only UWP (Universal Windows Platform) apps—that is, apps specifically designed to work across a wide range of Windows devices like laptops, tablets, and phones. Most developers weren’t willing or able to rewrite their apps for this format, especially since Microsoft initially charged the same 30 percent cut for any sales made on the Windows Store as competitors like Apple and Google.
That all changes with the new Microsoft Store. After allowing game developers to upload win32 versions (read: the format that almost every Windows app you use comes in) to the store in 2019, Microsoft is extending that flexibility to everyone. Now developers can upload win32 apps, as well as apps built on any other framework.
Much more importantly, developers have the option of using their own payment system (or, as Microsoft clumsily calls it, “commerce engine”) to charge customers for using their apps. This means that major players like Adobe and Disney don’t have to hand over 12 to 15 percent of their revenue for the privilege of being on Microsoft’s store. Now that companies don’t have to jump through major hoops like rewriting their apps or forking over tons of cash to Microsoft, there’s a decent chance you might actually be able to use the Microsoft Store to find and manage apps you care about.
Finally, Microsoft is bringing Android apps to Windows in perhaps the weirdest way possible: through the Amazon Appstore. Within the Microsoft Store, you’ll be able to search for Android apps. If an app is available, the store will prompt you to download it “from Amazon Appstore,” which means it will be tied to your Amazon account, not your Google one. If you were hoping to download paid apps you bought via Google, you’ll have to buy them again. This compatibility is made possible through Intel’s Bridge technology , so we’ll have to see it in action to gauge how well it works, but at least in principle, it could be a handy way to get access to a few apps that are out of reach on Windows today.
" |
104 | 2,021 | "I Use Motion Smoothing on My TV—and Maybe You Should Too | WIRED" | "https://www.wired.com/story/motion-smoothing-defense-hdtv" | "Whitson Gordon
For years, new TVs have come with a feature called frame interpolation, or motion smoothing, enabled by default. By creating new frames in between the ones encoded in the movie, it makes motion clearer. But it also imparts an almost artificial look, as if the movie were shot like a soap opera on cheap video. So cinephiles—including many here at Wired—have raged against this feature for years, to the point that it's become a meme starring Tom Cruise.
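To picture what the TV is doing, here's a toy version of interpolation in Python: plain 50/50 blending of each neighboring pair of frames, which roughly doubles 24 fps to 48. Real sets use motion-compensated interpolation that estimates where objects move between frames, so treat this as a sketch of the idea, not what your TV's processor actually runs:

# A toy "motion smoothing" pass: insert one 50/50 blended frame between
# each pair of originals. Real TVs estimate motion instead of blending,
# so this is only an illustration of the concept.
import numpy as np

def interpolate_frames(frames):
    """Insert one blended frame between each neighboring pair."""
    smoothed = []
    for a, b in zip(frames, frames[1:]):
        smoothed.append(a)
        blend = (a.astype(np.float32) + b.astype(np.float32)) / 2
        smoothed.append(blend.astype(a.dtype))
    smoothed.append(frames[-1])
    return smoothed

# 24 tiny stand-in frames go in, 47 come out: the frame rate roughly doubles.
clip = [np.full((4, 4, 3), i * 10, dtype=np.uint8) for i in range(24)]
print(len(interpolate_frames(clip)))  # 47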
As a tech writer who reviews TVs, I've kept my feelings mostly under wraps, but it's time to come clean: I actually use motion smoothing at home.
Before you break out the pitchforks and tiki torches, hear me out: It's not as bad as it sounds. I still hate the way it looks out-of-the-box on most TVs. I use it on its lowest setting and only on TVs that can actually do the job well. In other words, I wouldn't say Tom Cruise was 100 percent right about motion smoothing —but maybe he's 80 percent right.
When early filmmakers were shooting the first motion pictures, they tried a variety of frame rates, eventually settling on 24 frames per second.
This wasn't some magic number that created a certain "filmic" effect, like we think of it today—it was, in part, a cost-saving measure. Film stock doesn't grow on trees.
It's enough to give the illusion of motion, but it isn't really continuous, says Daniel O'Keeffe, who does in-depth display testing at RTINGS.com.
He uses the example of a tennis ball flying through the air: "If you were watching the game in person, you could track the ball smoothly and it may always appear in the center of your vision. This results in clear, smooth motion." But on film, you aren't actually seeing motion—you're seeing a series of still images shown at a rate of 24 per second. This isn't a huge problem in a movie theater, where typical projectors use a shutter to black out the screen in between frames. During these blackout periods, he says, "Your eyes 'fill in' the intermediate image due to a phenomenon called persistence of vision." This makes the motion appear smooth, despite its relatively low frame rate. Old CRT and plasma-based displays had inherent flicker that resulted in a similar effect.
But modern LCDs use what's called sample and hold: They draw the image super fast, then hold it there until the next frame. (You can see it in action in this video from The Slow Mo Guys.) So your eye attempts to track an object moving across the screen, but that object isn't always where your eye expects it to be. It's still held in its previous position, and there's no black flicker to give your eyes a chance to "fill in" the missing information. So the image appears to stutter and blur, especially in shots that pan across the scene too quickly. You can see a more visual representation of this in RTINGS' video series on motion.
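The arithmetic behind that stutter is simple. At 24 fps, each frame sits on screen for about 42 milliseconds, so anything in motion jumps a sizable distance between frames instead of gliding. A quick sketch, with an arbitrary pan speed rather than a measured one:

# Why a 24 fps pan stutters on a sample-and-hold screen: each frame is
# held for 1/24 of a second, so a moving object jumps a big distance
# between frames instead of gliding. The pan speed is an arbitrary
# example, not a measured figure.
FRAME_RATE = 24                  # frames per second
PAN_SPEED_PX_PER_S = 1920        # hypothetical pan crossing a screen width in 1 s

hold_time_ms = 1000 / FRAME_RATE
jump_px = PAN_SPEED_PX_PER_S / FRAME_RATE
print(f"Each frame is held for {hold_time_ms:.1f} ms")  # ~41.7 ms
print(f"The object jumps {jump_px:.0f} px per frame")   # 80 px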
Some people don't notice or care about this stutter. Other people, like me, are more sensitive to it and find it uncomfortable to watch. Certain TVs are more prone to it, too, depending on their response time—their ability to shift colors quickly. Cheaper TVs with slow response times stutter less, instead causing a moving trail behind objects. TVs with fast response times—like high-end LCDs and especially OLEDs—have less of a ghosting trail but will stutter more. Neither is really ideal, and neither will give you motion as clear as a CRT or plasma display would. So dweebs like me can't watch a movie on modern sets without silently cursing under their breath about how the movie looks like a slow, messy flip book.
(A quick note for the TV nerds: I'm talking about 24 frame-per-second stutter here, not the telecine judder produced by using 3:2 pulldown to fit 24 frames into a 60-Hz refresh rate; the cadence is sketched below. That's an entirely different phenomenon, though many people conflate the two. You can fix telecine judder by using a streaming box capable of outputting 24 Hz properly, like the Apple TV 4K or Roku Ultra (here's our guide to picking the best Roku). Not all streaming services will support proper 24-Hz playback, though, so a TV that can reverse this pulldown process is also helpful.)
So here we come to the crux of my dilemma. Twenty-four frames per second is not an ideal frame rate for modern displays, but it's what we're all used to, and it doesn't seem to be going away soon.
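For the curious, that 3:2 cadence is easy to spell out: each film frame is alternately held for three and then two 60-Hz ticks, and those uneven hold times are the judder. A minimal sketch:

# Classic 3:2 pulldown: to fit 24 film frames into 60 screen updates per
# second, frames are alternately repeated for 3 ticks and 2 ticks. The
# uneven cadence is the telecine judder described above.
def pulldown_32(frames):
    """Expand film frames into a 60 Hz tick sequence with a 3:2 cadence."""
    sequence = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2
        sequence.extend([frame] * repeats)
    return sequence

ticks = pulldown_32(["A", "B", "C", "D"])
print(ticks)       # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
print(len(ticks))  # 10 ticks per 4 frames, which is how 24 fps maps to 60 Hz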
Sample-and-hold displays are sticking around for now too, but the latest models attempt to combat these motion issues with two primary features: black frame insertion and the dreaded motion interpolation. I won't get into the nitty-gritty of black frame insertion too much, but RTINGS has a great explainer on how it works and what some of its downsides are. On most TVs, it dims the picture significantly and causes a flicker that some people find uncomfortable—not to mention image duplications that can mar the image.
Which brings us back to frame interpolation, aka motion smoothing. And yes, its default settings are usually far too dramatic. But I've found that lower settings are less offensive. A bit of interpolation adds just enough information to "clean up" the picture during moving scenes, giving you a clearer, less stuttery image without making it look like an episode of Days of Our Lives.
That said, finding this balance can vary from TV to TV, and some brands do it better than others. Remember, the TV is taking frames from your movie and guessing how frames in between them should look—which can result in artifacts, or glitches, in the picture when it guesses wrong. O'Keeffe says these artifacts are more common on higher interpolation settings, but it depends on the TV, its interpolation algorithm, and its processing power—and, to an extent, on how much you notice them to begin with.
In my experience, no one does it better than Sony, which has a reputation among A/V enthusiasts for having the best motion processing. This is, in large part, due to its Cinemotion feature, which has been present on Sony TVs for many years. The company tells me this feature uses de-telecining (to reverse that 3:2 pulldown judder) and tiny amounts of frame interpolation to present 24-fps content the way you expect to see it, rather than the way modern sample-and-hold displays show it in its purest form. Most people probably don't even realize this is happening, especially since Sony's main Motionflow interpolation feature is separate from the more subtle Cinemotion setting: Even if you turn Motionflow's Smoothness down to zero, there's still a bit of interpolation happening in the background with Cinemotion on.
But part of Sony's reputation is also due to its fantastic processing algorithms, which can interpolate frames with fewer artifacts than competing brands. And ultimately, it's why I bought a Sony TV after many years of motion-induced frustration—no other brand could hit that sweet spot quite as well without side effects. Sony's current flagships, the X950H LED and A8H OLED, use its most advanced processing hardware, and I've had personal experience with both; they're the models I'd recommend looking at if you want the best motion on a modern TV. But you can try it on your current set, too—you just need to play with the settings.
Each brand calls its interpolation feature something different, and the settings can even vary between models from the same brand. But if you dig into the options, you'll almost certainly find it under Motion. Samsung calls it Auto Motion Plus, for example, while LG calls it TruMotion. Vizio just calls it Smooth Motion Effect, and TCL calls it Action Smoothing. Bump it up by one or two notches, give yourself time to get used to the subtle differences, and see what you think—you'll also find the black frame insertion feature in that menu, if your TV has one, and you can use them in conjunction with one another if your TV has a good implementation.
Sony's motion settings are a tad more complicated than other brands, but I've found that turning Cinemotion off, with Motionflow's Smoothness and Clearness both set to 1—the lowest settings for interpolation and black frame insertion—produces the best motion to my eyes. These three settings all interact with one another differently, so you may have to try different combinations to see what you like best.
So I'm sorry, Mr. Cruise: I watched Mission: Impossible Fallout with motion smoothing turned on. (It was still awesome, by the way). But, while it's ultimately personal preference, there are still times when I recommend disabling it entirely.
First and foremost, you should always turn off motion processing when gaming. Because the TV has to know the next frame to generate interpolated motion, O'Keeffe says, having it turned on will inherently introduce input lag. (A rough lower bound on that lag is sketched below.) So you'll get smoother motion, but the controls won't be as responsive, making those boss battles more difficult. (That's why your TV has a Game Mode, which turns off motion interpolation alongside lots of other behind-the-scenes processing.)
In addition, he says, motion smoothing can be hit or miss for sports. While a lot of people like the added clearness it provides, it can also produce more artifacts in fast-paced play—like a hockey puck disappearing during slap shots.
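That lag floor is simple arithmetic: to build an in-between frame, the set has to hold at least one future frame before it shows you anything. The figures below are illustrative lower bounds, not measurements of any particular TV:

# Why interpolation adds input lag: the TV must wait for the next real
# frame before it can compute the in-between one, so it always runs at
# least one frame behind. These are lower bounds, not measurements.
def min_added_lag_ms(fps, buffered_frames=1):
    """Lower bound on extra latency from holding future frames."""
    return buffered_frames * 1000 / fps

print(min_added_lag_ms(60))     # 16.7 ms for a 60 fps game
print(min_added_lag_ms(24))     # 41.7 ms for 24 fps film
print(min_added_lag_ms(60, 3))  # 50.0 ms if the set buffers three frames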
I've also found certain movies to be more affected by interpolation than others, especially on certain TVs. On my old LG OLED, for example, even low levels of interpolation introduced noticeable soap opera effect on films like Captain America: Civil War , which have scenes that use a strobing effect to minimize motion blur.
Other movies, like Spider-Man: Into the Spider-Verse , use clever frame-rate tricks in their animation to tell the story, and interpolation can interfere with that. So you may want to set up a few different settings profiles you can flip between at will.
Ultimately, though, it's all up to you. I'm not here to tell you that motion interpolation on its highest setting is a crime against cinema, nor that pure 24 Hz is a motion-sickness-inducing atrocity. As always, there's a tug-of-war between accuracy and preference, and you're free to do whatever you want with your TV. But if you're picky about motion like me, you might find this solution creates a happy balance that's easier on the eyes. Just don't throw me in the TV-reviewer stockade.
" |
105 | 2,023 | "Lenovo ThinkPhone by Motorola Review: Getting There | WIRED" | "https://www.wired.com/review/lenovo-thinkphone-by-motorola" | "Julian Chokkattu
Rating: 7/10 | $700 at Motorola, $700 at Lenovo
Last year, I said Motorola's Edge 2022 felt like the “first good Motorola phone in a while.” Well, the company's two for two now. Except this isn't your usual Motorola smartphone—the Lenovo ThinkPhone by Motorola is a collaborative effort between the smartphone company and its parent, specifically the Lenovo division with a cult-like following for its ThinkPad business laptops.
The guts feel the same as last year’s Motorola Edge, with improvements here and there, like an IP68 water-resistance rating to protect it from spills, pool dips, and rain and a flagship-grade processor to keep it running smoothly. It looks a whole lot smarter—classy, as Jim Halpert would put it —and there’s even a little red configurable button on the edge of the handset to synergize with the iconic red nub on Lenovo’s PCs. The ThinkPhone isn’t my first or second Android phone choice for most people. Still, if you don’t want a Samsung Galaxy or Google Pixel, it’s a nice alternative, especially for anyone already rocking a ThinkPad.
I'm currently testing the ThinkPhone while using a ThinkPad, and although you don't get any exclusive features when you pair the devices, it sure as heck feels nice to have a shared aesthetic. The ThinkPhone has Gorilla Glass Victus protecting the screen and an aluminum frame with an aramid fiber rear inlay that matches the weave design and soft-touch texture on the back of the ThinkPad. I'll keep saying synergy because I'm pretty sure that's what the designers repeated in the drawing room.
There's a red button on the top-left edge of the phone, and you can set a single press to whatever you want. I use mine to open Google Wallet. The downside is that you can't configure the double-press. It forces open Motorola's Ready For service, which you can use to cast your phone's apps to nearby displays (along with some other functions). More on that below. Naturally, the gestures you'll find on every other Motorola phone are also present, so you can make a chopping action twice to turn on the flashlight or a double twist to launch the camera. They're super handy, and I use them every day.
The Ready For feature is available on other Motorola phones, but it lets you pair the ThinkPhone with a PC to unlock perks like using your smartphone as a webcam, running Android apps in a virtual space on your laptop, responding to notifications, and sharing files. I mostly use it for universal copy and paste—like MacBooks and iPhones, you can copy something on your laptop and paste it on your phone (and vice versa). You don’t need a ThinkPad for this to work, just a Windows machine.
ThinkPads are known for their plethora of ports. The one I’m using has two USB-C ports, a headphone jack, an HDMI, and two USB-A ports. Weirdly, the ThinkPhone just has a single USB-C port and … nothing else. Not including a headphone jack or even a microSD card slot feels like a missed opportunity. At least you get 256 gigabytes of built-in storage, which is more than most phones at this price. Oh, and although it’s not always the case on Motorola phones, yes, there is an NFC sensor so you can make contactless payments.
I like the 6.6-inch screen. It's large, but since the phone is narrower than it is wide, it's still easy to grasp. You're treated to a 144-Hz OLED panel, which is plenty sharp, bright, and colorful—no complaints here. I set it to 120 Hz because that feels plenty fluid for me and saves a little battery life. Speaking of which, you get a 5,000-mAh battery cell that, even with higher-than-average use, comfortably lasted me two full days. Hooray!
OK, I do have one quibble with the screen. Motorola is just about the only Android phone maker to not offer an always-on display, which typically lets you see the time and notifications without having to pick up or touch the phone. Instead, Motorola employs Peek Display, which requires you to interact with the phone to see the clock and alerts. I get it, not everyone wants an always-on screen, but it'd be nice to have the option.
Motorola does buck the trend of not including a charger in the box by including … a 68-watt charger.
A little overkill! It doesn’t recharge the phone scarily fast like a OnePlus handset , but it can recharge your ThinkPad; no need to lug around your bulky laptop charger. There’s also support for wireless charging on the ThinkPhone, which I always love to see. Yes, I’m lazy. I’d rather not fish for a cable in the dark before bed.
You don’t have anything to worry about when it comes to performance. Sure, the ThinkPhone is powered by last year’s flagship processor, the Qualcomm Snapdragon 8 Plus Gen 1, but it’s been delivering plenty of computing power for scheduling all my emails before my upcoming vacation, and even for taking out some baddies in Streets of Rage 4 when I’m killing time.
That leaves us with the camera system, which is this phone’s weakness. It’s by no means poor. I was able to snap some nice atmospheric photos at a Hiatus Kaiyote concert in Brooklyn last week with the 50-megapixel primary sensor. It’s capable, even in low light, though you need to use Motorola’s Night Vision mode and stay super still. The colors look natural, and there’s usually solid exposure. The ultrawide and selfie cameras are also serviceable, though I found the latter lackluster, as it picks up fewer details and my skin tone sometimes looks wonky.
The problem is that the Pixel 7A and Galaxy S23 offer a superior camera experience overall. In fact, those two phones are better in a lot more ways than just the camera. The Pixel has tons of helpful smarts, like Call Screen so you never have to deal with spam calls, and it’s $200 cheaper than the $699 ThinkPhone. The S23 has an extra telephoto camera, a super-bright display, and even better performance. Both also have more generous software update policies. The Pixel will get three Android OS upgrades and five years of security updates, while Samsung goes the extra mile with four OS upgrades.
Motorola is offering three OS upgrades and four years of security updates. That beats Motorola's track record, but it's not just about the number of updates. It's about delivering them in a timely fashion, and Samsung and Google are much faster at this. Take the Edge from 2022. It's still on Android 12, the version of Android it launched with, and has yet to receive Android 13. It's great that Motorola is promising lengthier support windows, but I still don't trust it to deliver them in a reasonable time frame.
The ThinkPhone is technically a business phone, but it’s available at Motorola and Lenovo for anyone to buy. Most people should stick to a Pixel or Samsung, maybe even a OnePlus, but if the above description appeals to you, I think you’ll have a fine time with this Lenovo/Motorola hybrid. Motorola has some ground to cover before it gets a top recommendation from me, but at least it’s on the right path.
" |
106 | 2,021 | "The PS5 Is Starting to Look Like the Revolution It Promised | WIRED" | "https://www.wired.com/story/playstation-5-six-months-later" | "Peter Rubin
Six months after its launch, there's a new phase of PlayStation 5 games on the horizon: titles that are leveraging the console's capabilities to push forward.
Six months after its November 12 debut, the PlayStation 5 is well on its way to being a success story for Sony. As of March 31, the company had sold 7.8 million of the new video game consoles worldwide—enough, in both units and dollars, to make it the biggest console launch in US history.
Bigger than the Nintendo Wii. Bigger than the Xbox One. Bigger than even the PS4. And who knows what that number might be if everyone who wanted one was actually able to buy one.
In the world of gaming, PlayStation reigns supreme, but it isn’t Supreme.
It’s not engineering scarcity to create marketing buzz like a streetwear company; it’s trying to get its $399 console into customers’ hands. Which is exactly why, in the same breath that he’s using to discuss how the PS5 is outpacing even its mega-selling predecessor, Sony Interactive Entertainment president and CEO Jim Ryan is apologizing.
“We’re working as hard as we can to ameliorate that situation,” Ryan says on a Zoom call, mere hours after receiving his second Covid-19 vaccination shot. “We see production ramping up over the summer and certainly into the second half of the year, and we would hope to see some sort of return to normality in terms of the balance between supply and demand during that period.” If you’re among the unlucky not-quite-few still having trouble getting their hands on one, you already know the beats of this story too well. Back in November, the twin launches of the PS5 and the Xbox Series X came in the midst of a global lockdown—and what felt like perfect timing for stir-crazy gamers proved to be a perfect storm for sales snafus. The same production and logistical snarls that made it damn near impossible to get a home appliance last year curtailed distribution. In the absence of factory visits and in-person quality checks, vendor relations became far more challenging. When release day finally came, the necessity of online-only sales opened the door for bots and predatory resellers to scoop up big chunks of precious inventory and jack the prices higher than Usher’s falsetto. And then there’s the semiconductor shortage that has affected TV companies and carmakers alike.
So promises of amelioration may feel like cold comfort. But whether you’re among the 7.8 million who already have a PS5, or the millions who might have one if not for that whole unprecedented-global-disruption thing, the real question is whether the PS5 is delivering the experience.
Are developers harnessing its feature set to create games that weren't possible before? Have first-party and indie studios navigated the pandemic well enough to keep the pipeline of exclusive titles stocked? Is the PS5 proving to be, as PlayStation chief architect Mark Cerny promised two years ago , a revolution rather than an evolution? The short answer is yes. The slightly longer and more accurate answer is, it’s getting there.
The day a new game console comes out is about far more than a shiny new piece of hardware. It’s also traditionally a chance for a company to make an argument for said hardware through software—including games that could only be played on that console.
System-sellers can happen anytime, but launch system-sellers are a special breed. Think Halo: Combat Evolved on the original Xbox. Resogun on the PS4. Super Mario 64. The Legend of Zelda: Breath of the Wild. Hell, think Wii Sports.
(And yes, Nintendo has always leaned on its launch titles harder than anyone. It’s had to—it doesn’t compete on console horsepower, instead attracting customers with gotta-play-it first-party exclusives.) But by 2020, when the Series X and PlayStation 5 arrived to signal the dawn of a new generation, those expectations had changed somewhat. The PS5 may have launched with a dozen titles, but nearly all the standouts, from Spider-Man: Miles Morales to Assassin’s Creed: Valhalla , could be played either on the PlayStation 4 or another platform. On paper, this might have been disappointing. Consider, though, that the best PS5 games weren’t there to be exclusives; they were there to be showcases.
People could web-swing through Manhattan as Miles Morales on their PS4, but they couldn’t fast-travel across the city in mere seconds unless they were on a PS5, with its load-time-killing solid-state hard drive. They could enjoy the scenery on the PS4, but they couldn’t see Miles’ reflection in buildings and puddles as they passed without the ray tracing effects that the PS5 enabled. ”When you can see true reflections in a video game, it's a pretty spectacular moment for players,” says Ted Price, founder and CEO of Spider-Man developer Insomniac Games.
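Those true reflections come from tracing rays against the scene rather than relying on screen-space tricks, and the geometric core is a single line of vector math. This is the textbook reflection formula, not Insomniac's actual renderer:

# The geometric heart of a ray-traced reflection: bounce a view ray off
# a surface and trace the reflected ray to see what shows up in a puddle
# or a window. Textbook formula only; no relation to Insomniac's code.
import numpy as np

def reflect(direction, normal):
    """Mirror a ray direction about a surface normal: r = d - 2(d.n)n."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    return d - 2 * np.dot(d, n) * n

# A ray angling down into a flat puddle (normal pointing straight up)
# bounces back up at the same angle, which is why you see the skyline.
print(reflect(np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0])))
# [0.70710678 0.70710678 0.        ]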
Beyond dialing up eye candy and intensity, those early titles also became a barometer by which Sony Interactive could gauge how developers were utilizing the new features the PS5 made possible—not just ray tracing or the SSD, but 3D audio capabilities, or the robust haptics of the DualSense controller and its “adaptive triggers” that can deliver variable pressure. Just because a machine can do something doesn’t mean developers will take advantage of it, either immediately or at all. But last summer, at a console reveal event, Sony showed off a double handful of titles that would be arriving for the PS5, six of which featured ray tracing. “That’s astonishing,” says Cerny. “I thought ray tracing was something that would be used in second- and third-generation titles. I thought that maybe an early title might show a little bit about the potential, and it would be one of those things where you’d be wondering, as somebody involved with the creation of the hardware, was this worthwhile to be put in, given the associated cost in silicon? And to have that question answered the very first time titles were shown in public was amazing.”
Amazing because some console tech never catches on, either because it’s simply not intuitive (Cerny cites the PlayStation Vita’s rear touchpad) or because it takes time to learn the intricacies of a new machine. Anytime a new console is on the horizon, and again when it's released, Cerny travels around the world talking to studios about its capabilities, and he’s heard it all—including literal boos, as when he told one unnamed developer years ago that the forthcoming PS4 might use a bit of Flash memory to help cache data. (The boo worked; Sony moved away from that architecture choice.)
Cerny’s most recent developer tour happened virtually, of course, but he was surprised by what he found. “The conversations can be very contentious,” he says. “I actively seek out the people who will have strong opinions, who clearly lay out all the issues they're having with the hardware, so that we can get busy thinking about how we can address those in the future.” The PS3’s architecture made it difficult to get a graphics pipeline going; the PS4’s CPU wasn’t as powerful as folks hoped. The PS5, Cerny says, has found miraculously little pushback.
Now, six months after launch, a new phase of PS5 games has begun: titles that are leveraging the console’s capabilities to push forward.
First was Returnal , a console exclusive from Housemarque, the same studio that created PS4 standout Resogun.
The creepy roguelike shooter received raves for its immersive narrative techniques and atmospheric gameplay—gameplay that tapped into the PS5’s 3D audio and haptics like nothing before it. When players run through an overgrown biome on a hostile alien planet, the raindrops somehow feel like they’re coming through the controller itself. Aiming your weapon at an attacking creature is a two-part process: Your trigger stops halfway to use your usual sidearm, and depressing it more unlocks the weapon's secondary function. (Astro’s Playroom, a cute platformer from first-party Japan Studios that came preinstalled on the PS5, shows off the DualSense’s haptics as well, but it functions as a tech demo as much as a game.)
Housemarque isn’t even a first-party developer; what’s coming from the collection of companies under the PlayStation Studios umbrella bears out the breadth and depth of the pipeline. In June, Insomniac will follow up Miles Morales with Ratchet and Clank: Rift Apart, the newest installment in its long-running platform shooter franchise. The SSD’s impact is on full display there, allowing players to walk through dimensional portals and emerge nearly instantly into an entirely other level. Later this year, Guerrilla Games will release Horizon Forbidden West, a sequel to its massive 2017 hit Horizon Zero Dawn.
While Forbidden West will be available on the PS4 as well, Guerrilla studio director Angie Smets credits the DualSense’s haptics capabilities with allowing special sauce well beyond fast travel and graphical fidelity. “If you want to take a stealth approach to a combat situation and you dive into long grass,” she says, “you can feel those long grass leaves.”
According to Hermen Hulst, a Guerrilla cofounder whom Jim Ryan tapped to lead PlayStation Studios in 2019, the group has more than 25 titles in development for the PS5—nearly half of which are entirely new IP. “There’s an incredible amount of variety originating from different regions,” Hulst says. “Big, small, different genres.”
And in many of those cases, Sony’s shared services became a lifeline for studios navigating lockdown. Having moved all its employees home in early 2020, Guerrilla Games found itself staring down the barrel of a game that hadn’t even finished its voice and performance capture, let alone play-testing. For the audio, Guerrilla shipped recording booths to the voice actors’ homes. Performance capture was tougher, since it couldn’t use its usual facilities in California, but last summer the studio moved into a new Amsterdam space they’d designed to have a motion-capture stage; that, plus some very careful hygiene, allowed them to get what they needed. And the play-testing? Well, it’s a good thing Sony had invested in cloud gaming for its streaming service PlayStation Now. “Seeing that first play test using PlayStation was a huge relief,” says Smets. “Knowing that, ‘OK, great, we can continue.’”
Indies are getting in on the fun too. Haven Studios, a new venture from industry veteran Jade Raymond, has partnered with Sony for its next game, as has Firewalk Studios for an unannounced multiplayer title. Ember Lab, an animation and digital studio, is releasing its first major game, the Zelda-esque Kena: Bridge of Spirits, in August; while the game began its life as a PS4 title, Sony encouraged cofounders (and brothers) Josh and Mike Grier to make it available on both. Now, the pair is excited not just about what they’ve been able to do with the added horsepower—more characters onscreen, 3D audio, using the haptics to make the protagonist’s bow and arrow feel as lifelike as possible—but what they’ve learned for when it comes time to make their next title. “Our groundwork was on the PS4,” says Josh Grier. “But looking at game two, focusing on taking advantage of the SSD and building mechanics and tools around that, will be really fun. I know for sure we haven't fully taken advantage of how actually fast it is—we were getting a lot of benefits of it being just out-of-the-box better. But I think you can push it even more.”
Save for Returnal, of course, all that’s in the offing. But just because the best might be yet to come doesn't mean PS5 owners haven't been using it so far: According to Sony spokespeople, from launch through the end of March, users logged 81 percent more time on the console than on the PS4 during its own comparable period in 2013-14. Similarly, 11 percent more game units have been sold during the PS5's first five months than on its predecessor. Is some of that thanks to increased home time? No question. Across the PlayStation ecosystem, user gameplay time was 20 percent higher in March 2021 than the same month in 2019. There's some irony, of course, in the fact that the thing driving usage is also in part driving shortages.
"What you’re seeing is the high demand that we’ve seen during the pandemic coming across all aspects of consumer electronics," says Carolina Milanesi, president and principal analyst at Creative Strategies. "Televisions, consoles: We were just online more than we ever have been before." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But it also speaks to the strength of the offerings—even if they represent earlyish explorations of the PS5 feature set. Many of the cross-platform titles performed better on the PS5 than they did on the Xbox Series X, for instance. And for the folks inside the company who have been playing preview builds, there’s a feeling of deep satisfaction about what’s to come. “I spent some time yesterday with Horizon Forbidden West for the first time in seven or eight months,” says Hermen Hulst, who had been involved with the game in its early years before leaving Guerrilla to head up PlayStation Studios. “To step away and to come back to it? Talk about giving me a gift.” 📩 The latest on tech, science, and more: Get our newsletters ! The cold war over McDonald's hacked ice cream machines One thing Covid didn’t smash to pieces? Monster movies Sharks use the Earth’s magnetic field like a compass It began as an AI-fueled dungeon game.
" |
107 | 2,012 | "Review: Sony PlayStation Vita | WIRED" | "https://www.wired.com/2012/01/sony-playstation-vita" | "Mark Anderson
Rating: 7/10 | $250 at Sony
The most distinguishing feature of the PlayStation Vita, Sony's new portable game machine, may go unnoticed at first glance.
The thing Sony is banking on occupies a tiny little area on the face of the unit, roughly a square centimeter in size.
Do you see it? It's the analog joystick, the one on the right side, sitting just beneath the familiar PlayStation buttons. This may seem a relatively minor distinction, but it's the thing that furthest separates the Vita from the Nintendo 3DS , iPhone , Kindle Fire , or any other self-contained gaming platform. Two analog sticks means hardcore gamers don't have to compromise; they can play Uncharted: Golden Abyss on a Vita the same way they'd play it on a PS3, using one stick to move and another to aim.
Absent some hypothetical and unlikely Nintendogs -style killer app, Sony doesn't have a prayer of selling the $250 Vita ($300 with 3G connectivity) to the sort of erstwhile gamer who is perfectly happy playing on a tablet. So the company's strategy would seem to be to double down on the hard-core crowd by aiming at the sort of person who feels anything without sticks and buttons barely qualifies as a videogame in the first place.
To that effect, the Vita works very well. The beefy processing power, stunning OLED display and console-like controls can come together to produce experiences like Uncharted that feel like miniaturized home games. The open question is whether software makers will want to invest the time and money into crafting exclusive Vita games that take advantage of all that capability.
The Vita will be released in the United States on Feb. 22. Wired got its hands on a Japanese unit, released in December, for this early review. (I tested the Wi-Fi version, as a Japanese 3G plan wouldn't do me any good here.) Your PlayStation Vita won't exactly be going into your pocket. Its size is somewhere between a Nintendo 3DS and a smallish tablet. But it's comfortable to play for extended periods of time, in large part because it's so wide and flat. The sticks aren't like the sliding pads of the PSP or 3DS; they're joysticks that tilt. The power adapter can split apart into a USB cable for charging or transferring data, although you have to use the included proprietary cable to plug in the device since there's no mini-USB input.
The Vita may be chasing after established gamers, but it's hardly a traditional machine. Sony has (finally) bowed to the pressures of the market and added touch sensitivity to the screen. You use your finger to interact, not a stylus as with Nintendo's machines. The menu screens all use touch: You swipe the screen to unlock it, touch to scroll through the icons on the menu, and tap them to open the software. Vita's UI has a lot of cute little touches that make it fun to play around with. If you try to scroll the icons too far, they'll stretch and bounce. Unlocking the screen or closing apps is done by peeling a "sticker" off the screen by swiping from a detached corner.
There's another control option: Most of the rear of the unit is a large touch-sensitive panel, so you can control games by swiping your fingers along the back of the Vita. Sony seems to have added this without any sense of what it would be used for, though, and I haven't played any games yet where it really changes things. The closest anything has come is Uncharted, which allows you to zoom your sniper rifle in and out by moving your finger up and down.
Another way the Vita breaks with tradition is shopping. As with other game consoles, the Vita will have games available on cartridges in stores and as downloads online. The difference is that every cartridge game will also be available on the online store the same date as its retail release, usually a little bit cheaper. So if you don't want to carry around a bag full of game carts, you don't have to. Of course, since games run about $30-50 online, they're hardly impulse purchases. Demos for some, but not all, games are available on the digital store.
There is a catch, of course. Sony's penchant for proprietary accessories has led to one of the Vita's biggest flaws. The device has no internal storage – at least none that can be used for saved game data or downloaded content. If you want to download anything, you have to buy a Vita memory card; they range from $20 for 4GB up to $100 for 32GB. You can't use existing cards. Vita cards are unique to the platform. Even worse, many cartridge-based games (including, yes, Uncharted) won't even boot without a memory card.
It's easy to back up data to a PC or PS3, though, using a Content Manager application. If you're having bad flashbacks of SonicStage right about now, rest assured it doesn't require a garbage piece of bloatware to function. The PC program is a small application that is used to assign four folders on your PC for the PS Vita to access (music, movies, photos and games), and you do all of the data transfer using simple software on the Vita itself.
There are a lot of other built-in functions – the usual stuff like friend messaging and a "party" function that lets you group a bunch of people together. The most unique one is Near, an app that shows you information about other gamers in your immediate area who also have a Vita. You can see what games they're playing and send them a friend request. This doesn't work as well as it should right now – you're supposed to be able to trade virtual in-game items with each other, but Near is telling me I have to share my online ID to do this. I am sharing my ID, but it doesn't think I am.
Sony says a Vita should get about four or five hours of gaming time out of a battery charge, and this jibes with my experience. I've had mixed results so far when leaving it in sleep mode: at one point I left it for a few days and returned to find it still had some battery, but the next time I abandoned it for a while it totally drained itself and I had to charge it for ten minutes before I could even turn it back on again.
Another limitation: You can only have one user account per system, so you can't share a Vita with other people who have PlayStation accounts, nor can you create, for example, a Japanese account to download games from other regions.
As a PlayStation device, the Vita's success or failure will ultimately be the result of its game library. I don't have enough visibility to determine what its software calendar will look like past launch day, but as of now, it's enough to say the Vita is a capable piece of hardware with a user interface that runs laps around the PSP's, just with a few potentially irritating flaws.
WIRED Big, bright screen. Console-quality graphics. Comfortable to hold and play at length. Dual analog controls. Touch-based user interface is a winner.
TIRED No internal storage. Requires expensive proprietary memory sticks (even for some cartridge games!). Only one account per system allowed. Expensive games with no cheap options (yet).
" |
108 | 2,023 | "Watch Every Style of Beer Explained | Each and Every | WIRED" | "https://www.wired.com/video/watch/each-and-every-beer" | "Every Style of Beer Explained (released 08/18/2020)
When you look at the majority of the beer produced around the world today, a lot of it falls into the categories of things like American lager, American light lager, international pale lager.
All of those styles are descended from this one beer.
[rock music] Hi, I'm Pat Fahey, Master Cicerone and content director for the Cicerone Certification Program.
And this is each and every beer style.
In the wine world, experts are known as sommeliers and are certified by the Court of Master Sommeliers.
The Cicerone certification program serves a similar purpose in the beer world, educating both professionals and enthusiasts.
One of the most important topics that we cover in all of our materials is beer styles.
And the style guidelines are usually created primarily for the purpose of judging beers in a competition setting.
So we're going to work within those guidelines.
So we're going to be covering a lot of different classic styles, but we'll also talk about some of those variations at the end that brewers use to produce the wide landscape of beers that you see today.
I broke those styles down into eight different groups, primarily based on the flavors that you find in those beers.
So we're kicking things off with malty lagers; we've got 19 of them to cover.
We're gonna see a huge range of different types of malt flavor.
Malt flavor is commonly described as like bakery flavors, because a lot of the sorts of flavors that you get in different types of malts are things that you might see in different types of baked goods.
So on the pale end, you might see things like bread dough or crackers, or like freshly baked bread.
And as it ranges and gets darker and darker, we go through flavors like caramel, nutty, toffee, on into chocolate and espresso, and really, really dark malty beers.
We're gonna see kind of the whole range of those flavors here.
And we're gonna find that the color of many of these styles really dictates the sorts of malt flavors that we're gonna see.
So during that final phase of the malting process, the grain is first dried, and then the maltster will apply some amount of additional heat to the grain to sort of determine what flavors they want that grain to have.
So to kick things off in the malty lagers group, we're gonna be talking about American and international lagers, and we're gonna start with a style that's one of the most widely available beer styles in the world.
That's American light lager.
Think of Bud Light, Coors Light, Miller Lite.
It's a style that's relatively light in its overall flavor profile.
American lager was developed before the light lager style came into being.
Light lagers kind of came around in like the mid 1970s.
But at this point, the light lager style is the more popular of the two.
They both share a lot in terms of production processes, though, a lot of times they're gonna be brewed using either rice or corn in addition to barley.
And that addition of rice or corn is used to lighten both the color, the malt flavor and the body of those beers.
One of the reasons that both American light lager and American lager have been so successful is that they don't have a lot in the way of identifiable flavors going on.
They're relatively inoffensive beers designed to appeal to a broad swath of people.
The next subgroup is a family of international lagers and they're differentiated based on their color.
We have international pale lager, international Amber lager and international dark lager.
International pale lager is far and away the most popular member of the group; think beers like Heineken, Corona, Peroni, Asahi Super Dry.
Now these are beers that look a lot like American lagers.
They're usually a little bit beefier.
They're oftentimes not gonna be made with corn or rice.
They could be all malt beverages, which is gonna give them a little bit more body and a little bit more flavor.
They also are potentially bitter to a slightly higher level, but still very approachable beers, easy drinking, widely enjoyed by people all around the world.
So both international amber lager and international dark lager are not seen quite as often. For international amber lager, maybe the most common example you'd see would be Dos Equis Ambar.
On the international dark lager side, you're looking at something like a Shiner Bock.
International amber lager is usually gonna have a little bit more malt character to get that amber color.
And that's gonna give you maybe just a touch of caramel or toast flavor.
International dark lager, on the other hand, you might expect that that would have even more malt character, but a lot of times that color is going to be the result of some caramel coloring.
Oftentimes just like a colored version of an international pale lager.
Then the last member of this sort of American lager family is a beer called cream ale, which as the name implies is not actually a lager.
Kind of snuck it in here in part just because the flavor profile is so similar to what you see in American lagers.
Cream ales are often formulated a lot like a normal strength American lager.
They're just going to be fermented with ale yeast.
And typically it's gonna be done in a way where you still don't get very much in the way of fermentation flavor.
The next group that we're gonna cover is a group of malty European lagers that are all kind of normal-ish alcohol strength, like the four and a half to five and a half percent alcohol range.
And while the last group was a little bit tighter in terms of the flavors that we saw from the different styles, this one is gonna cover a broader range of different malt flavors and characteristics.
First up, we've got Munich Helles, a pale lager style from Germany, and with German styles, knowing just a little bit of German can honestly help you understand a fair amount of the basics of any style.
You look at the name of this beer: Munich means it's a beer from Munich, and Helles means light or pale in color.
So any beer you see from Germany that has like hell or Helles in the name is gonna be a pale beer.
Munich Helles today is the most popular everyday drinking beer that you find across Bavaria.
You see people hoisting huge steins of pale golden beer.
It's usually gonna be a liter of Helles.
It's just an awesome drinking beer.
Next up is Kellerbier. Keller means cellar.
And it's a reference to the fact that these beers are oftentimes served having only just finished fermenting.
So you get a beer that's a little bit younger and is typically unfiltered when it's served.
There are different variations of the style, but one of the more common is just a pale Kellerbier, which is basically an unfiltered version of Munich Helles.
The next two styles, Märzen and Festbier, are very closely tied together, and they're related to the traditional Oktoberfest celebration.
So back in the early 1800s, Crown Prince Ludwig was getting married.
They brought a lot of people together for this wedding and threw just a huge rager of a beer fest. People had such a good time doing it that they were like, you know what? We should do this every year, forever.
And that became Oktoberfest.
Originally, the beer that was served at Oktoberfest was this style of beer called Märzen. Märzen means March, like the month of March.
And this was a stronger, somewhat dark beer that was brewed in March, typically would have been cellared throughout the warmer months of the year, and then enjoyed in October or late September.
So it naturally sort of synced up with this Oktoberfest celebration.
However, over time, consumer tastes have sort of shifted to favor beers that are a bit paler in color.
So today, when you see Oktoberfest beer being served at the Oktoberfest celebration, it's usually pretty pale in color.
That style is usually referred to as Festbier.
The Vienna lager style was developed in a similar timeframe as Märzen, kind of like the early to mid 1800s.
And it's honestly a pretty similar beer in terms of its makeup; it's maybe a little bit lighter in color and a little bit less malty, balanced slightly more towards bitterness, but otherwise it's a pretty similar beer style.
It was developed around the same time as the Märzen style, just in Vienna instead of Bavaria. Ironically enough, Vienna lager doesn't really exist in Vienna to an appreciable extent these days.
You're far more likely to find it in the US or even in Mexico.
Next up is Munich Dunkel.
And going back to sort of our translation of German words, Dunkel is the German word for dark.
Any beer that you see that's labeled as a Dunkel, or has Dunkel somewhere on the label, has kind of darker, toastier, pretzel-like bread crust malt flavors to go along with it.
The easiest way to think of Munich Dunkel is kind of as a dark version of a Munich Helles.
Historically, Dunkel actually came long before Munich Helles did.
When you look at beer history in the grand scheme of things, really like pale golden beer is a relatively recent invention.
The first widely available pale beers didn't come into being until early to mid 1800s.
Prior to the 20th century, Munich Dunkel would have been that every day, drinking beer for the citizens of Bavaria.
Continuing with our lesson on German color words, Schwarz is the German word for black.
So Schwarzbier literally translates to black beer.
It's basically just a darker version of a Munich Dunkel.
You're usually not gonna see tons of overt, heavily roasted character like you would in a stout, things like coffee and espresso, but it'll have some additional light chocolate flavors in addition to the toasty notes that you find in a typical Munich Dunkel.
The final pair of this group are a couple of multi lagers from the Czech Republic.
We have Czech Amber lager and Czech dark lager.
Both of these styles are honestly pretty difficult to come by outside of the Czech Republic, and even can be a bit hard to find if you're in the Czech Republic, but if you are able to get your hands on them, they're delightful beers.
Both of these Czech lagers combined sort of a rich malt profile with a pronounced level of bitterness and a bit of spicy hop aroma.
The last subgroup of this malty lager family is the Bock beers, which is itself a group of four higher-strength German lager styles.
Now our German language skills do fail us a little bit here.
The direct translation of Bock is goat, but that's not really where the name comes from.
The name is thought to be basically a linguistic corruption of beer from the city of Einbeck.
However, Bock does carry a legal connotation in Germany, beers that are labeled as Bocks are usually going to be a bit stronger than your average beer.
At the very least usually gonna be about 6% alcohol, but some of the members of the Bock family can range all the way up to 14% alcohol.
The first member of the family we're gonna talk about is Dunkles Bock; as we already talked about, Dunkel means dark.
So this is basically a strong dark lager.
Today, it's one of the least seen members of the family.
Next up is Helles Bock, which is a pale strong lager.
A Helles Bock is also sometimes referred to as Maibock, Mai translating to the month of May, and Helles Bock, or Maibock, is a common spring seasonal beer in Germany.
Helles Bock looks a lot like a stronger version of a Munich Helles, though it does usually feature a little bit of hop flavor and aroma, and given its rather high strength, it's actually a pretty refreshing beer.
Doppelbock today is essentially a higher-strength version of a regular Dunkles Bock, and of all of these Bock sub-styles, it's the most widely available.
The original Doppelbock was a beer from the Paulaner brewery in Germany known as Salvator.
They still make that beer today.
To this day, a lot of breweries name their doppelbocks with the suffix -ator, A-T-O-R, to sort of imply a connection to that original Salvator.
So you'll usually see that as part of their name.
The final style of the Bock family is Eisbock, and conveniently enough, Eis, E-I-S, in German translates to ice in English.
Basically Eisbock is a Doppelbock that has been frozen to concentrate its flavor.
Water freezes at a higher temperature than alcohol does, so as you cool beer down, you're able to remove some of that frozen water, leaving you with a more highly flavored, more intense, higher-alcohol product.
Following this freezing process, you're left with an incredibly potent beer.
Eisbocks can range all the way up to 14% alcohol.
And they oftentimes will feature kind of like dark fruit or dried fruit notes.
Things like raisin, prune, or even fig.
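To put rough numbers on that freeze-concentration idea, here's a minimal sketch in Python. It assumes the removed ice is pure water and that no alcohol is lost with it, which is an idealization of real Eisbock production:

```python
def eisbock_abv(starting_abv: float, fraction_frozen_out: float) -> float:
    """Estimate ABV after freeze concentration.

    Assumes the removed ice is pure water, so all of the alcohol
    stays behind in the remaining liquid (an idealization).
    """
    return starting_abv / (1.0 - fraction_frozen_out)

# A hypothetical 8% doppelbock with a third of its volume removed
# as ice lands at 12% ABV; removing about 43% of the volume gets
# you to the 14% top end mentioned above.
print(round(eisbock_abv(8.0, 1 / 3), 1))  # 12.0
print(round(eisbock_abv(8.0, 0.43), 1))   # 14.0
```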
This group of hoppy lagers isn't broken up into multiple sub-families.
All of these styles are sort of derivatives of the original Pilsner style and that Pilsner style is unquestionably one of the most influential beer styles in the history of beer.
So the first style that we're going to talk about is Czech premium pale lager.
The style name for this beer used to be Bohemian Pilsner, but was actually changed recently for a few reasons.
One, Pilsner is a German word, so it doesn't really make sense for the Czech style to carry a German name. But two, in the Czech Republic, Pilsner is not a style of beer.
Pilsner is a brand.
You may have heard of the beer Pilsner Urquell.
That is this original Pilsner style, but in the Czech Republic, that's the only beer that carries the name of Pilsner.
Now, Pilsner Urquell was first brewed in late 1842, and upon appearing on the scene, it totally transformed beer across all of continental Europe.
The beer combined several kind of unique or novel characteristics.
It was brewed with very soft water from the town of Pilsen, which allowed for a beer that had a higher than normal level of bitterness, but kind of like a soft pleasant quality of bitterness that people really enjoyed.
It also was one of the first, truly pale golden beers produced, leveraging new advances in malting technology that for the first time allowed pale malts to be produced affordably on a large scale.
When you look at the majority of the beer produced around the world today, a lot of it falls into the categories of things like American lager, American light lager, international pale lager.
All of those styles are descended from this one beer.
Continuing on a similar theme, we have Czech pale lager, which is kind of like a lighter version of the Czech premium pale lager.
It's a little bit lower in malt flavor, a little bit lower in body.
A little lower in alcohol content.
The German Pils style is probably the closest direct descendant of the Czech premium pale lager style.
This was basically German brewers copying the Czech Pilsner style to try to produce a similar beer using the ingredients that they had available to them.
At this point today, you're far more likely to find brands of German Pils than brands of Czech premium pale lager outside of Pilsner Urquell.
On the whole, German Pils is a really refreshing and crushable style; for those of you not acquainted with the term, crushable is like drinkable, but better.
The first iterations of the American lager style, which is so prevalent today were made by German immigrants to the US in the mid 1800s.
So just as Czech premium pale lager very directly gave rise to the German Pils style, German Pils very directly sort of birthed the American lager style.
Next up is German leichtbier, which is basically German light beer, though compared to our American light beer, it's a significantly more assertive beer.
German Helles export beer falls somewhere between a German pils and a Munich Helles.
It typically has a little bit more bitterness than a Helles would, but a little bit more body than you'd expect to see in a Pils.
Many of the classic examples of this style come from the city of Dortmund and the style actually used to be called Dortmunder export.
Kölsch is another example of an ale that I snuck into a lager category, but because of the way that the fermentation is handled, Kölsch usually doesn't have tons and tons of fermentation character.
A lot of times it's described as sort of like the ale version of a German Pils.
The Kölsch style comes from the German city of Cologne.
The German name for that city is Köln, and Kölsch literally means of Köln.
The service tradition of the style in the city of Cologne is incredibly unique and a lot of fun if you've ever been or you ever get the chance to go.
It's just an awesome experience.
Kölsch is served in these little rod-shaped glasses.
They're 200 milliliters, so just over six ounces, which obviously does not take very long for you to drink through.
But the way that it's served is that servers run around with trays full of Kölsch.
As soon as you finish one glass, they replace it with a fresh glass and just make a tick on your coaster, and they keep going and going and going until you put your coaster over the top of your glass, telling them that it's time to stop.
Even though Kölsch is only moderate in alcohol content, like four and a half percent, and it's served in small glasses, given the ease of consuming them and the rate at which they're replaced, it's very easy to be conversing with friends and look down and see like 10 or 15 tick marks on your coaster.
Very fun, somewhat dangerous, but always a good time.
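For a sense of how quickly those tick marks add up, here's a back-of-the-napkin sketch. The glass size and ABV come from the description above; the 0.789 g/ml ethanol density and 14-gram US standard drink are standard reference figures, and the 15-glass session is just the hypothetical one mentioned:

```python
GLASS_ML = 200            # the small rod-shaped Koelsch glass
ABV = 0.045               # roughly 4.5 percent alcohol by volume
ETHANOL_G_PER_ML = 0.789  # density of ethanol
US_STANDARD_DRINK_G = 14.0

def session_summary(tick_marks: int) -> tuple[float, float]:
    """Return (liters of beer, US standard drinks) for one coaster."""
    total_ml = tick_marks * GLASS_ML
    ethanol_g = total_ml * ABV * ETHANOL_G_PER_ML
    return total_ml / 1000, ethanol_g / US_STANDARD_DRINK_G

liters, drinks = session_summary(15)
print(f"{liters:.1f} L of beer, about {drinks:.1f} standard drinks")
# 3.0 L of beer, about 7.6 standard drinks -- hence "somewhat dangerous"
```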
Last in the group is pre-Prohibition lager, a historical style of which you don't see a lot of commercial examples today, but it's kind of an interesting style because it represents the link between German Pils and American lager.
Pre-Prohibition lager approximates the beer that German immigrants would have been brewing when they first came to the US in the mid 1800s and were trying to recreate the German Pils style using ingredients that they had available to them here.
Now, Prohibition, through its ban on alcohol, obviously put a lot of breweries out of business, which in and of itself would have been pretty bad for the beer industry, but it also came at a pretty bad time.
Around the same time as Prohibition, we had a couple of world wars.
We had the great depression.
So all of those sorts of events together conspired to really decimate the brewing industry.
We went from a place where there were thousands of breweries in the country to having less than a hundred by the time you got to the 60s and 70s.
As a result of all of these events, what we saw happen was that beer before Prohibition was a lot more varied: there were a lot more different styles, a lot more flavorful beers.
Beer after World War II was kind of in American lager territory, where most beer tasted exactly the same.
There wasn't a lot of variety.
It wasn't a very exciting thing to drink.
It was just a commodity product.
In addition to Prohibition, what really shaped that was just kind of the general trends in consumer products that occurred in the 50s, when you had a lot of people moving from urban centers to suburban areas, a lot of people spending less time in bars and more time at home.
It's kind of like the TV dinner era.
And so at that same time, across consumer products, not just with alcohol, you had a move from smaller-batch, kind of mom-and-pop products to mass-produced, homogenized products, and it happened in every industry.
You look at toothpaste: how many toothpaste brands can you name? How many laundry detergent brands can you name? There are just a few, and they're all relatively similar.
What really impressed people in the 50s was not, this is a really flavorful thing.
They were like, it's really cool that I can go anywhere in the country and get something that's exactly the same.
So it was kind of that mentality that shaped all consumer products in that era.
And then later there was a move, in food and beverage and eventually other things, back towards artisanal products.
You can point to Starbucks as sort of the rise of people going from coffee is just the thing I drink for caffeine to coffee is the thing I drink for certain flavors.
That sort of lined up with people getting into craft beer or craft spirits; craft beer developing when it did was part of a broader dynamic of people wanting variety in all of the products that they were looking at.
Due to the way that ales are fermented, they typically will show some amount of flavor derived from their fermentation.
Usually a little bit of fruity character.
All of the beers in this group lead with malt flavor, though some of them also have significant levels of either hop flavor or fermentation-derived flavor.
To structure this group, we went with beers that range from kind of pale to brown in color.
First style here is dark mild, which is sort of like your classic malty British pub beer, a very highly sessionable beer; sessionable is a term that's used to describe beers that are lower in alcohol, that you can drink several of over the course of a session.
It goes great alongside a lot of pub fare.
Like I love drinking dark mild with classic bangers and mash.
British brown ale is balanced a lot like a dark mild, though it's a bit stronger, at maybe around 4 to 5% alcohol content.
Think something like a Newcastle brown ale.
Also in the family of brown British ales, we have the London brown ale style.
It's at this point considered a historical style.
There are very few examples of it available.
English barley wines are often considered to be a malt showcase, with really robust notes of caramel and toffee, but also molasses, treacle, maybe some dark fruit character like plum, prune or fig.
They are really robust and interesting beers.
In most cases, barley wines are going to be the strongest products produced by a given brewery.
And they're usually also going to be vintage dated, in part because, due to their high alcohol content, they can age pretty well.
Next up in this category, we have the sort of generic British strong ale style, which serves as a bit of a catch-all for a lot of higher-alcohol English malty beers.
Compared to barley wine, it's usually going to be a little bit lower in alcohol in sort of like the six to 8% range, rather than eight to 12% like barley wine.
Last among the British malty ales, we have the old ale style, which is similar in strength to the British strong ale and typically features some amount of aged character, which can go in a lot of different directions.
However, like British strong ale, this is a style that allows for a pretty wide range of interpretations.
It's not the most popular of styles.
So you don't see tons of examples of the style on the market these days.
We've got five different styles that fit in the subgroup of Scottish and Irish malty ales, and the first three of them, Scottish light, Scottish heavy and Scottish export, are all very, very closely related.
These three styles are very similar in their flavor profiles and are primarily just separated by different levels of alcohol content and consequently different levels of intensity.
The light is usually somewhere between 2.5 and 3% alcohol.
Scottish heavy will be maybe three to 4%.
And Scottish export might be four to 6% alcohol.
Scottish light and heavy are rather challenging to find outside of Scotland and honestly are even somewhat difficult to find in Scotland.
The most commonly available beer of this style that you'll see out on the market is Belhaven Scottish Ale, which is a classic example of the Scottish export style.
Then we have wee heavy, which is kind of like an amped-up version of the other Scottish ales.
Wee heavy has similar flavors, but can range from like six to 10% alcohol.
So the flavor is a lot more intense.
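Since these four Scottish styles are separated almost entirely by strength, the relationship fits in a simple lookup. This is a rough sketch using the approximate bands quoted above; real style guidelines draw the boundaries a bit differently, and the bands here touch at their edges, so the first match wins:

```python
# Approximate ABV bands as described above; official guidelines
# differ slightly, and adjacent bands share an edge, so the first
# matching band wins.
SCOTTISH_BANDS = [
    (2.5, 3.0, "Scottish light"),
    (3.0, 4.0, "Scottish heavy"),
    (4.0, 6.0, "Scottish export"),
    (6.0, 10.0, "Wee heavy"),
]

def classify_scottish_ale(abv: float) -> str:
    """Map an ABV to the Scottish ale sub-style whose band contains it."""
    for low, high, name in SCOTTISH_BANDS:
        if low <= abv <= high:
            return name
    return "outside the Scottish ale family"

print(classify_scottish_ale(2.8))  # Scottish light
print(classify_scottish_ale(5.2))  # Scottish export
print(classify_scottish_ale(8.0))  # Wee heavy
```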
One thing that is notable about these four Scottish styles: people sometimes think that they should be made with peated malt.
Peat is kind of that really intense, smoky character that you get in certain types of Scotch whisky.
However, peated malt is used exclusively in whisky production.
It's not typically used by Scottish beer brewers.
So those sorts of flavors are not appropriate in these styles.
Heading over to Ireland, we have Irish red ale, which is sort of a light refreshing Irish ale.
A lot of beers that feature sort of a reddish hue are often going to be produced with a specific type of malt known as caramel malt.
Irish red ale sometimes will use caramel malt, but more often will get its red color from a very, very small amount of roasted barley, the ingredient that makes beers like Irish stout black and gives them their heavily roasted character.
We have four different American malty ale styles that cover a pretty wide range of different characteristics.
And we're starting off with American blonde ale.
As the name implies, American blonde ale is a golden pale colored beer.
American blonde ale is meant to be a really approachable beer.
And in a lot of cases serves as sort of a transitional beer for people moving from beers like American lagers into beers that have a bit more flavor.
From a flavor perspective, American wheat beer is pretty similar to American blonde ale.
So as the name implies, it's made with some amount of malted wheat.
Once again, this relatively straightforward beer was a popular style with a lot of early American craft breweries as a way to move people beyond the American lager beers that they might have been more familiar with.
In the 90s, a lot of pioneers of the American craft beer movement had flagship beers that were of the American wheat style.
You can look at Goose Island with 312 and Widmer with their Hefeweizen as two really prominent examples.
For a pretty dramatic change of pace, we jump to American brown ale.
American brown ale is sort of like the American take on the British brown ale style.
A lot of times, when American brewers adapt a style from somewhere else, they're going to make a beer that is usually more aggressive in some way.
In my opinion, American brown ale is a pretty under appreciated style.
I have a pretty deep love for the style.
It makes for a really fantastic companion to a wide range of different foods.
I wish that there were more of them out there.
With wheat wine, we have a unique specialty beer that's kind of made in the same vein as barley wine, though it also includes a pretty significant portion of wheat malt.
The style is rather high in alcohol and drinks kind of like an amped-up version of an American wheat beer.
Next up, we've got a pair of malty ales from continental Europe, one from Belgium and one from France.
Most classic Belgian beer styles are dominated by yeast-derived flavors, but Belgian pale ale is pretty malt driven.
It doesn't have as much yeast character as most other Belgian styles.
Honestly, Belgian pale ales are kind of hard to come by these days.
There are two main producers in Belgium, De Koninck and Palm, and those are two of the best and potentially only examples of the style that you can find.
Biere de garde is a French specialty.
It's produced in the northeast of France, kind of along the border with Belgium.
And it's an interesting style in that it's produced in three different color bands.
You have blonde biere de garde, Amber biere de garde and brown biere de garde.
Kentucky common is a historical style.
It's very rarely seen commercially today, and the beer drinks a lot like a dark version of a cream ale.
And then last in the category we have Sahti.
Sahti is a very unusual Finnish style.
And I actually had a hard time pinning down exactly where I wanted to put this one, because it has so much going on in it.
There are a few driving flavors in the style, the most prominent being juniper.
Sahti uses juniper berries as a flavoring, and the other place where people might be familiar with the juniper flavor is as sort of the main flavor in gin.
So Sahti has kind of a gin-like piney, herbal, floral character to it that really drives the flavor profile of the style.
Next up, we've got a slate of 12 roasty dark ales.
This group covers a number of different porters and stouts.
All of which are brown or black in color and feature some amount of roast flavor.
Originally stout grew out of Porter as a stronger version of Porter, but today that's not necessarily the case.
English Porter was a tremendously popular style in the UK, in the 1700s and had kind of a unique production process.
At the time, it was a beer that was made as a blend of both young and old ales.
So some of the beer would have been aged in large wooden vats where it would develop acidity and kind of like funky characteristics.
Then that would be blended with younger beer to produce the finished Porter style.
English Porter today does not reflect that process.
It's just kind of a dark ale style.
Every Porter and stout sub-style currently in existence can be traced back to this one style of beer.
The majority of the beer made in the world is made with four specific ingredients: malt, hops, yeast, and water. Through using different varieties of those ingredients and manipulating the way that those ingredients are used, brewers are able to achieve a tremendous variety of different flavor profiles in their beers.
Today in a lot of modern styles, brewers may augment that list of ingredients.
Perhaps adding things as run-of-the-mill as chocolate or coffee or certain fruits, or really weird ingredients like lobsters or zebra mussels or other shellfish.
By and large, the majority of beer achieves this wide palette of flavors using just those four ingredients.
The heyday of Porter in the UK was very much the 1700s.
It sort of declined in popularity over the course of the 1800s, and then basically died out in the 20th century.
However, the Porter style was sort of resurrected by American craft brewers, looking for styles to experiment with.
Baltic Porter is another take on the Porter style, in this case brewed in countries that sort of surround the Baltic Sea.
Some of the more prominent commercial examples come from like Sweden, Russia, and Poland.
Baltic Porter stands out of this group in that it's actually usually made as a lager rather than an ale in part due to the colder climates of the countries where this beer is typically produced.
At the time when this beer was originally being made, it would have been easier for them to do a lager fermentation than an ale fermentation.
Baltic porters will usually be anywhere from six to 10% alcohol.
Lastly, we have pre-prohibition Porter.
Pre-prohibition Porter is another historical style, not very widely available today.
Honestly, hard to find in commercial settings.
It's a recreation of what Porter might've looked like around the time of the Revolutionary War in the US, and at that point it was very much an American brewer's take on the English Porter style.
In the mid 1700s, when Porter was extremely popular, brewers didn't have a lot of options available to them when it came to making different styles of beer.
Ingredient availability was an issue.
You would have maybe one or two different malts to choose from.
As a result, the primary way for brewers to expand their offerings was for them to brew beers of different strengths.
Basically to use different amounts of ingredients in the beers that they brewed.
Stout grew out of this tradition as a stronger version of Porter.
And all of these different levels of stouts were basically differing alcoholic strengths of beers that resembled Porter.
Today the key differences between the stout sub-styles often come down to the balance of the beer and the strength of the beer.
First up in the stout family, we have Irish stout, which is probably the best-known stout sub-style as a result of the widespread popularity of Guinness Draught.
I actually think that Irish stout is one of the most misunderstood beer styles out there.
People look at this beer and they see that it's dark.
And so there are a lot of assumptions that come with that.
People think that because it's dark, it's gonna be full bodied and high in alcohol and assertively flavored.
And basically none of those things are true.
And as a result, it's actually like a pretty easy drinking beer.
Irish stout gets its dark color and its heavily roasted flavor from the use of roasted barley, which is kind of the signature ingredient in the style.
Another unique aspect of the Irish stout style is that it's often served on nitro, which means it's served using nitrogen instead of just carbon dioxide.
The inclusion of nitrogen is what creates that sort of cascading bubbles effects that you see when a beer like Guinness is poured, and it also has some pretty distinct impacts on the flavor experience of the beer.
It ends up reducing the perceived bitterness of the beer and also gives it kind of like a smooth, creamy texture, just because you encounter that really creamy head first when you drink the beer.
The next two styles, Irish extra stout and foreign extra stout, are rather similar to Irish stout in terms of their balance.
The main differences here come down to the alcohol content and consequently, the sort of the overall intensity of the style.
Irish extra stout is gonna be a little bit stronger than an Irish stout.
Maybe five to 6% in alcohol.
Foreign extra is stronger still, maybe six to 8% alcohol.
The tropical stout sub style is descended from some of those export type stouts that would have been sent to tropical locations, such as the Caribbean, or even like parts of India.
The style is similar in strength to a foreign, extra stout, but the balance is pretty different.
It's usually a lot sweeter.
American stout is the American take on the foreign extra stout style.
And like most American interpretations of styles, it got made a little bit more intense, tending more towards that burnt, ashy, robust espresso flavor.
Imperial stout is the strongest of all of the stout sub styles.
The style guidelines say that it can go up to 12% alcohol, but in truth, you see some variations of the style that go higher.
I can think of Imperial stouts up around 15 or even 18% alcohol.
In the late 1700s, these really, really high octane stouts were very popular with the Russian Imperial court as an export beer.
And so a lot of brewers in the UK took to naming their strongest stout beers as Imperial stouts or Russian Imperial stouts.
Interestingly enough, this is where we get the word Imperial from, as it is applied to beer styles.
You'll see, on beer labels, Imperial Porter or Imperial IPA, Imperial Pilsner.
That just means a stronger version of the style.
And that's tied back to kind of the history of Imperial stout.
Usually meant to be a sipper, definitely a beer you can sit down and enjoy over a period of time.
Imperial stouts are also a really common candidate for barrel aging, particularly spirit barrel aging.
Some of the first beers aged in barrels were Imperial stouts aged in bourbon barrels.
And that trend is very, very popular today.
Our last two stouts, sweet stout and oatmeal stout, are sort of moderate-strength stouts that are distinguished primarily by the use of unique ingredients.
Sweet stout doesn't necessarily have to have anything unique added to it, but oftentimes it's going to be brewed with the addition of lactose, in which case it's specifically referred to as a milk stout. The reason why brewers use lactose in these beers rather than any number of other sugars is that yeast cannot ferment lactose.
Yeast are, in essence, lactose intolerant.
By adding lactose that sugar remains in the beer through fermentation and gives you a sweeter fuller bodied finished beer.
Oatmeal stout, as you might imagine, is made with the addition of oats.
And interestingly enough, the oats are not necessarily used for their flavor contribution; oats will sometimes give these beers a little bit of a nutty characteristic, but the bigger impact is that oats will usually give the beer sort of a luscious, velvety texture, which is what brewers are typically after when they use oats to make this style.
The hoppy ale category includes 21 different sub styles and covers a wide range of hoppy ales from various regions around the world.
Now, depending upon how they're used hops can impart either bitterness or aroma and flavor to beer.
Hops grown in different parts of the world have different flavor characteristics.
Our first group of hoppy ales are the British hoppy ales.
And we start things off with English IPA, sort of the beer that kicked off a lot of these other styles.
Now the English IPA style is the original IPA style, and it comes with a pretty widely known story that unfortunately is not terribly grounded in reality.
IPA stands for India pale ale.
And the story goes that when the British were colonizing India, you had people working over there, soldiers, et cetera.
They were very thirsty, and beer was being shipped over there, but it was all going bad in transit.
And so the brewers had to develop this hoppy, high alcohol style in order to slake the thirst of all of the people over in India.
However, the real story is a little bit less romantic.
It turns out that at the time brewers were sending all sorts of beer, including Porter and other pale ales over to India, and it was making it there just fine.
In terms of the high hopping rate, brewers at the time knew that beer that was more highly hopped would keep longer.
So anything that was getting shipped to India would be highly hopped due to the fact that it had to survive this long voyage before it was consumed.
And lastly, with regards to the high alcohol content, most India pale ales clock in around six to 7% alcohol.
That's maybe high by today's standards, but the British ales of the day were often between like five and 10% alcohol.
So they really would have been just like moderate strength beers.
What is true about the story, however, is that this pale, bitter beer did become very popular in India and eventually became popular back in the UK, at which point it developed that name, India pale ale.
As has been the case with many of these styles, India pale ale's popularity has sort of ebbed and flowed over the years.
So the India pale ale of the mid 1800s is not too close to the IPA that we know and enjoy today.
However, the India pale ale style did spawn a number of pale bitter styles and sort of laid the foundation for most of the pale bitter beers that today exist in the UK and the US.
The next three sub styles are a group of English pale ales known as bitters.
They're direct descendants of the IPA style.
As with the Scottish ales, these three styles are primarily distinguished just based on their alcohol content, going in order from low strength to high strength, you have ordinary bitter, best bitter and strong bitter.
Of the three sub styles, you're most likely to encounter beers in the best bitter sub style.
One of the more prominent examples is London Pride from Fuller's.
Lastly, we have the British golden ale style, which is paler in color than the other members of this family and consequently features less malt flavor and aroma.
Next, we have a group of three average-strength American hoppy ales.
And we kick things off with the American pale ale style.
American pale ale is sort of the original American craft beer style.
Now, in the early days of the craft beer movement, the late 70s and early 80s, brewers were looking for inspiration.
And oftentimes they turned to classic styles made in the UK.
A lot of early American craft brewers learned how to brew from English home brewing texts.
So, doing like others before them had done, like we saw with German brewers trying to recreate Pilsner in America, we have American brewers trying to recreate English bitters using American ingredients. Classic American hops tend to feature a lot of citrus flavor notes like grapefruit and tangerine, as well as some kind of piney, resiny sorts of flavors.
It doesn't get quite as much love as it deserves these days, but it's such an awesome and amazing style, and still what I go back to all the time.
The American amber ale style is basically a slightly darker take on the American pale ale style.
Typically brewers would use a little bit of caramel malt in making this beer, so you get a little bit of caramel-toffee malt flavor, and then the hops are usually dialed back just a hair, but otherwise it's a very similar beer to American pale ale.
Lastly, we have the California common style, which is a rather unique American innovation exemplified by the Anchor Steam brand.
Originally this beer was known as steam beer and in California in the mid 1800s through the early 1900s, there were a vast number of different breweries making steam beers.
However, they all died out one by one.
And in the end, only the Anchor Brewing Company was left standing, at which point they took a trademark out on the name steam beer.
The main thing that makes this style unique is that it's actually fermented with lager yeast.
However, it's fermented with lager yeast at higher temperatures.
So it ends up still giving you some of those fruity fermentation flavors.
As a result, it's more or less just kind of like a unique take on an American Amber ale.
Next up, we have the giant category of IPA sub styles.
There are nine different styles that sort of fit under this IPA umbrella and brewers are constantly experimenting and trying new things.
It's entirely possible that by the time this video comes out, there will be another IPA sub style in existence.
The core style of the IPA group is the American IPA.
It's American brewers' early-90s interpretation of that historic English IPA style: more hop aroma, more bitterness, more alcohol, just like more everything.
One of the things that keeps the IPA style and really the whole IPA family fresh and exciting is that hop breeders and growers continue to release new varieties of hops that bring sort of an additional palette of flavors to brewers' arsenals.
At this point, the American IPA style has been exported all across the world.
Today, American IPAs are significantly more popular with brewers in the UK than English IPAs are, and you also can find American IPAs throughout Asia and Central and South America, where they call it IPA.
So very, very versatile style, extremely popular one that I'm sure many viewers are familiar with.
Then we have double IPA.
The easiest way to think of double IPA is like American IPA, but more. There was definitely a period of time, probably in the 2000s and early 2010s, where breweries were kind of pushing to see who could make the most bitter and most intense beer.
Double IPAs are definitely a product of that time and that sort of hops arms race.
Now, the New England IPA style, sometimes referred to as hazy or juicy IPA, is a style that really only came around in the last five years or so, but it came into popularity very, very quickly and has become a dominant force in the beer space.
The calling card for these styles is their hazy appearance. A lot of times these beers are so hazy that they're almost kind of opaque; they might even look like a glass of orange juice in some cases. And they really favor what people term juicy hop flavors.
So think things like orange juice, mango, pineapple, kind of these really robust tropical flavors.
And then we have whole host of different specialty IPA categories.
The first group that we talk about is sort of color variations on the IPA style.
Most classic IPAs are either like pale golden or amber in color.
We have several different variations that play on different-colored versions of IPA. The four that exist are white IPA, red IPA, brown IPA and black IPA.
So in the case of white IPA, we have wheat being used; in the case of red IPA, oftentimes you're gonna see some amount of caramel malt used to give it that reddish hue and a little bit of sweet caramel flavor.
Brown IPAs will use some more heavily toasted malts, think like a chocolate malt, which will give it toasty, brown bread, or maybe even chocolate flavors.
Black IPAs are going to use deep, bitter black malts.
Belgian IPA involves a variation on the way that the beer is fermented.
Belgian yeast strains tend to be more characterful than American yeast strains, producing a lot more fruity characteristics and giving the beer an interesting blend of fermentation characteristics and hop flavors.
The last IPA of the group is rye IPA, which unsurprisingly is made with rye.
One common misconception that people have is that rye brings this sort of spicy caraway flavor.
And that misconception exists because rye bread is almost always flavored with caraway.
That's not the case though.
It's more along the lines of the difference between a bourbon whiskey and a rye-based whiskey.
American barley wine is the American take on the English barley wine style.
American barley wines are usually like bracingly bitter and have a fair amount of hop flavored aroma.
These are seriously intense beers, definitely sipping beers.
American strong ale is a bit of a catch-all category for various imperial versions of hoppy styles.
Think anything labeled like an imperial red ale or an imperial amber, something in that category.
One early and well-known example of this style was Stone's Arrogant Bastard Ale.
It was just a really aggressive, high-alcohol, hoppy beer.
Next up, we have a German bitter ale: altbier, which is indigenous to the city of Düsseldorf.
And this style, at least the production of the style, has a lot in common with the way that Kölsch is made.
Altbier is typically amber in color, with sort of toasty, bready malt flavors.
The altbier style is honestly kind of challenging to find outside of Düsseldorf.
However, if you are ever in Düsseldorf, the four traditional altbier breweries are all located in and around the Altstadt, the old part of the city, and are all within about a 15-minute walk of one another.
Last up in the hoppy ale category, we have Australian sparkling ale. Australian sparkling ale is a unique style, typically only found in Australia.
And it's really typified by the products from the Cooper's brewery.
It's a little bit like an English bitter, but typically going to be paler in color with less malt flavor and significantly higher carbonation.
Next up: fruity and/or spicy ales.
We've got 12 total styles to cover here.
And we're now moving into beers that are dominated by their fermentation flavors.
It's worth noting, for the most part, the beers in this category are not actually made using fruit or spices.
These flavors are coming entirely from the fermentation.
Styles in these groups use very expressive yeast strains that tend to produce high levels of a group of flavor compounds known as esters, which commonly give beer fruity characteristics, think banana, apple, pear, sometimes peach.
Some of the strains used for beers in this group can also produce a type of flavor compound known as phenols.
Those phenolic flavor compounds usually give beers spicy sorts of characteristics along the lines of like clove, nutmeg or white or black peppercorn.
Our first group of styles is the German hefeweizen-type styles.
So there are a few words worth knowing the translation of.
Hefe translates to yeast.
Then the other two words that are commonly used in association with this style are either weizen, which translates to wheat, or weiss, which translates to white.
So the first member of this family is the weissbier style.
And weissbier goes by a few different names.
Sometimes you'll see it labeled as hefeweizen, sometimes as hefeweissbier.
Brewers of this style use a very special yeast strain that gives the beer a lot of banana and clove flavor characteristics.
These beers are also very, very highly carbonated.
When you see them served, they're usually served in these tall, vase-like glasses that allow for two or three inches of foam to form on top.
Next up is dunkles weissbier.
It has a lot of similar fermentation flavors to weissbier, so still that banana-clove profile, but it also gets the addition of some amount of darker-colored malt.
Lastly in this category, we have a beer that's not truly a weissbier: this is roggenbier.
It's a beer made with rye rather than wheat.
It's more or less a rye-based take on the dunkles weissbier style.
[guitar music] The first one of the bunch, witbier, is actually made with spices, though it does also usually have a characterful fermentation as well.
Belgian witbier is usually spiced with both coriander and orange peel, which gives it sort of citrusy, floral notes.
This specific style basically died out in the 1950s and would probably have been totally lost to the world were it not for this one guy, Pierre Celis, who founded the Hoegaarden brewery.
Through his production of this style, he sort of slowly brought it back to prominence.
And today it's a popular beer style, among both small and large breweries.
It's just a beautiful, easy-drinking beer.
Saison is a really exciting yeast driven style that allows for a really broad range of interpretations and many people think of Saison as sort of the quintessential farmhouse style.
When you see a beer that's labeled sort of like farmhouse, oftentimes it will fall into this Saison category.
[guitar music] So the brand name Duvel actually translates to devil, and some people surmise that the name is a reference to how the beer can kind of sneak up on you.
If you have a couple of these sitting down and aren't paying close attention, you might find yourself in a different place than you intended.
As a result, many of the other brands within this style that are produced today bear allusions to the devil.
You get names such as Brigand or Lucifer, Beelzebub.
So you see a lot of devil references when it comes to [folk music drowns out conversation] Lastly, in the category of sort of these fruity spicy ales, we have a series of four different monastic beers that are also all typically made in Belgium.
Now these beers feature a numerical naming system.
The style names are Trappist single, Belgian dubbel, Belgian tripel, and then the top one is technically called Belgian dark strong ale but is oftentimes referred to as quadrupel.
Now there's no actual doubling or tripling of any of the ingredients or specific characteristics of the beer, but the beers do get stronger as you progress from single to quad.
To this day, a lot of these styles are produced by Trappist breweries located in Belgium and other parts of the world.
Trappist breweries are housed within Trappist monasteries and have to follow a number of strict guidelines in order to have their beer labeled as Trappist.
And the beers are generally regarded worldwide for their high quality.
The Trappist single style is probably the least commonly seen of any of these four.
The single style, also sometimes referred to as a patersbier, is usually reserved for the monks at the monastery where it's brewed.
This is a beer that they would kind of drink every day alongside their meals.
And as such, it's not usually packaged or distributed, very widely.
Belgian dubbel is a traditional Belgian style dating back to the early 1900s.
It was first produced at the Trappist brewery Westmalle in the 1920s. Belgian dubbels are usually amber to brown in color and typically present with a lot of flavors like brown sugar, sometimes molasses, maybe even a little bit of chocolate.
However, as is the case with many of these Belgian styles, Belgian dubbel is actually a pretty highly attenuated style, which means that most of the sugar has been fermented out.
It's a dry beer with very little residual sugar.
Belgian tripel dates to a similar timeframe as Belgian dubbel.
Tripel was also first brewed by Westmalle, in the 1930s, and is a pale beer.
Belgian dark strong ale, also sometimes known as quadrupel, drinks a lot like a strong version of a Belgian dubbel, but is usually two to three percentage points higher in alcohol.
Next: tart and/or funky beers.
Pretty much anytime you encounter high levels of acidity in beer, that's going to be the result of a bacterial fermentation, usually bacteria that produce lactic acid, maybe also bacteria that produce acetic acid.
Now, bacterial fermentation can sound kind of scary, but lactic acid bacteria are the same bacteria that produce yogurt.
So some of those kinds of tart flavors that you might find there are similar to the flavors that you have encountered in these beers.
These beers also sometimes incorporate so-called wild yeasts.
One of the most common that gets used is a yeast known as Brettanomyces, sometimes just referred to as Brett, and the flavors that Brettanomyces produces in beer don't always sound super pleasant at first blush: things like horse blanket, wet wool, barnyard.
These are characteristics that in isolation don't necessarily sound like they'd be good things, but at low levels, they can offer a really pleasant point of complexity in these beers.
So first up in this category, we've got two different tart German wheat beer styles.
The first is Berliner weisse, which is traditionally bracingly acidic.
It's usually very highly carbonated, but still, due to its kind of light body and low alcohol content, it's usually a very refreshing beer.
Historically bars would sometimes serve these beers with flavored syrups.
One of the stranger ones that was actually pretty prevalent was a syrup known as Woodruff syrup.
It had flavors of kind of like earth and hay, but more unusually, it's bright green in color.
These days, there honestly aren't a lot of Berliner weisses being made in Germany.
You're more likely to see smaller craft brewers making the style in places such as the US.
The style, given its acidity, lends itself really well to the addition of fruit.
So you'll see brewers adding things like peaches or raspberries or any number of different fruits to Berliner weisse-style brews.
Gose is honestly a rather strange historic tart wheat beer style that was pretty obscure and virtually unheard of 10 years ago.
But in the last decade has just exploded in terms of its popularity.
Gose drinks kind of like a mix of a Berliner weisse and a Belgian witbier.
So it has kind of the lactic acidity of Berliner weisse and the coriander that you get from witbier, but then it also has kind of its own unique twist, in that goses are usually made with the addition of salt.
At low levels, adding salt kind of enhances the body and the perception of sweetness of the beer.
The remainder of the tart and funky beer category is comprised of five distinct styles that come from Belgium.
We'll start with these Flanders or Flemish red and brown beers.
Belgians often don't make a clean distinction between these two styles.
In fact, Belgians generally don't talk about style nearly as much as we do over here.
Flanders red ale is a tart red beer from Western Flanders.
And the style is really typified by the products from a brewery named Rodenbach.
In addition to their acidity, these beers feature tons of fruit character, think black cherry and currant, lots of fruit notes present in these beers.
Sometimes these beers are referred to as like the burgundies of Belgium due to their similarities or overlaps with certain red wines.
And in that vein, this is a beer that I very much like to use with somebody who considers themselves a wine drinker but not a beer drinker.
I've definitely won over audiences of wine drinkers with this beer.
To develop their acidity, these beers are typically aged in really large oak vats.
Of the two Flanders styles, oud bruin is a little less common than Flanders red ale, but the style is indigenous to East Flanders and is typified by the products from the Liefmans brewery.
The last group of tart and funky beers that we have to discuss is the lambic family of beers.
And honestly, I think that these are some of the most fascinating beers made anywhere in the world.
There are a number of things that are very unique about the way that lambic is produced, but probably the most unusual facet of their production is the way that they're fermented.
Normally in the process of making beer brewers first produce what is called wort.
This is the sugary and hot liquid that then gets fermented by yeast.
However, lambic brewers take a very different approach.
Instead, following the production of the wort, they transfer it into a coolship, basically a large, shallow basin, where they allow it to cool overnight.
And as the wort cools, the bacteria and yeast present in the air and the brewery begin to grow in the wort, essentially spontaneously inoculating the wort.
The base beer produced this way is generically referred to as lambic, and lambic is a style in and of itself.
Sometimes you will see straight lambics; however, that's rather rare.
You're far more likely to see this lambic beer, after it has aged for a couple of years, being used to produce two other more common styles: gueuze and fruit lambic.
So gueuze is typically a blend of several different vintages of lambic beer, usually some amount of one-year, some amount of two-year, and some amount of three-year-old lambic.
After these different vintages of lambic are blended together, the beer is refermented in the bottle to achieve a very, very high level of carbonation.
When it's finished, it almost drinks kind of like a funky champagne.
Lastly, we have fruit lambic, which once again takes that sort of tart, funky base lambic beer, but then the brewer is gonna add fruit to it and allow the fruit to go through another fermentation.
The most common fruits that you see used for these beers are raspberries, in which case the beer is known as framboise, or cherries, in which case the beer is known as kriek.
The final category we're covering is smoked beers.
And there are only three traditional styles that we're covering in this category, but they have such a unique flavor and such unique characteristics that they really just couldn't be put anywhere else.
Now, there are a few different ways that brewers can impart smoky flavors to beer, but the most common one is going to be through the use of smoked malt.
Basically during the last stage of the malting process, when the malt would normally be dried with hot air, instead it's dried with air from some sort of fire that imparts those smoky flavors and characteristics.
Furthermore, the specific flavors that you get from those smoked malts are very much determined by what fuel is used for the fire.
The first smoked style that we'll cover is the German style, Rauchbier.
Rauch is the German word for smoke.
So Rauchbier just means smoke beer in German.
The malts that get used to make classic German Rauchbier are going to be smoked with beechwood, which gives characteristic flavors of ham, bacon, or sort of like campfire notes.
But some of these beers, particularly the ones that feature high levels of smoked malt, can be downright meaty in character.
The second beer that we're talking about in this family is a beer known as Piwo Grodziskie, also sometimes referred to as Grätzer.
And this is a Polish smoked style and it's made with oak-smoked malts.
Now oak smoke tends to be a little bit softer than beechwood smoke.
So while this beer is still typically fairly intense in its smoke character, it's definitely a bit softer and more approachable than a classic Rauchbier often is.
The beer is also very highly carbonated and pretty low in alcohol.
So while you might not think of a beer that tastes kind of meaty as being a refreshing beer, it's actually a pretty easy-drinking beer.
And lastly, we have Lichtenhainer, which is a true historical oddity.
One way to think of it is basically like a smoked Berliner weisse, but with a little bit softer acidity to it.
While a hundred different beer styles definitely covers a lot of ground, there's still a fair amount of beer on the market today that doesn't neatly conform to a specific beer style.
Commonly brewers will take an existing style and modify it either using unique ingredients or perhaps a unique technique to produce an entirely new creation.
So one common variation is beers that fall into the broad group of American wild ales.
This would be a brewer taking a style and using either bacteria or wild yeast to ferment it, like we saw in some of those tart and sour beers.
Two really common categories of variations involve both fruit beers and spiced beers, beers where brewers are gonna be adding some sort of fruit or some sort of spices, or in some cases a mix of the two, in order to create something interesting.
Some beers will leverage alternative sources of fermentable sugar.
Most beer is going to be made with malted barley.
Some beers also include things like malted wheat, but there's a bunch of other ingredients that can be used.
Things like oats, spelt, rye, millet, or even things like molasses or agave.
Smoked beer is another somewhat common variation.
We talked about three specific styles that use smoked malt in their production, but brewers today tend to experiment all the time.
And so you can make any style into a smoked style by adding smoked malt to it.
Wood aging is another technique that brewers can use to create variations on their styles.
There are a lot of different types of barrels that brewers can use, but in a lot of cases, you'll see brewers using spirit barrels, things like bourbon or other types of whiskey, maybe rum barrels, sometimes even like tequila barrels.
And then the last category is just a total catchall for whatever weird things brewers are cooking up these days.
This is specialty beers, and this kind of includes mixes of some of the above categories. If a brewer wanted to make like a smoked beer with fruit or like a barrel-aged beer with spices, that's where this would fall.
[electric beer music] All right, that was each and every beer style.
I don't know about you guys, but all this talking about beers made me pretty damn thirsty.
I had a lot of fun talking to you guys, and I really hope that all this helps you find new beer styles in your journey.
Cheers.
" |
109 | 2,021 | "Everything Apple Just Announced: M1 Pro and M1 Max, MacBook Pro, AirPods 3 | WIRED" | "https://www.wired.com/story/everything-apple-announced-october-2021-macbook-pro-airpods-3" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Brenda Stolyar Parker Hall Gear Everything Apple Announced Today—Including a New MacBook Pro Save this story Save Save this story Save iPhones and iPads shared the limelight on Apple's virtual stage in September , but the company's October “ Unleashed ” event focused on all things Mac. Today, Apple announced a redesigned MacBook Pro in two sizes, both of which are powered by its newest M1 Pro or M1 Max chips. Apple also took the wraps off its third-generation AirPods.
If you didn't catch the event, here's everything Apple announced.
Photograph: Apple It's been more than a year since Apple announced it was swapping Intel chips for its very own in-house silicon: the M1, which powers both the MacBook Air and 13-inch MacBook Pro.
Succeeding the M1 are not one but two new chips: the M1 Pro and M1 Max.
Photograph: Apple The M1 Pro has a 10-core CPU (up from eight cores on the M1), with eight high-performance cores and two high-efficiency cores. For graphics, the M1 Pro offers an up to 16-core GPU (with up to 32 GB of unified memory) that's twice as fast as the M1. Meanwhile, the M1 Max features the same 10-core CPU coupled with a 32-core GPU (with support for up to 64 GB of unified memory). Apple claims both the M1 Pro and M1 Max are up to 70 percent faster than last year's M1, and graphics-wise, the M1 Pro is two times faster and the M1 Max is four times faster.
The power of the M1 Pro and Max remains to be seen, but it's safe to say these chipsets are what on-the-go content creators, video editors, and graphic designers (who rely on their MacBook Pros to accomplish intensive tasks) have been waiting for since Apple started moving away from Intel.
Photograph: Apple Apple went all-in with its MacBook Pro redesign. It comes in a 14- or 16-inch chassis, with slightly larger screen sizes at 14.2 inches and 16.2 inches, respectively. Both feature a Liquid Retina XDR screen with Apple's Mini LED display technology, which debuted in this year's 12.9-inch iPad Pro.
It doesn't produce blacks as deep as OLED panels, like on the iPhone, but it comes very close and maintains incredible levels of brightness with punchy colors. WIRED reviews editor Julian Chokkattu says he preferred watching movies on the iPad Pro with Mini LED over the larger LCD screen in Apple's 2021 iMac.
It comes complete with ProMotion (as seen on the iPhone 13 Pro and 2017 iPad Pro), which is Apple's 120-Hz refresh rate technology that makes content on the screen look much smoother. You can read more about how it works here.
On top of the screen is a notch that houses a 1080p camera for video calls, with a wider aperture that allows in more light, so expect better video call performance in dim rooms. But unlike the notch in the iPhone, there's no TrueDepth camera system here, which means no support for Face ID.
There is a Touch ID sensor on the keyboard, so you can still lock and unlock the MacBook Pro with your fingerprint. Apple also bid adieu to the Touch Bar, replacing it with physical keys instead, a startling admission that its vision for the elongated digital screen didn't go the way it hoped.
Photograph: Apple But the most exciting upgrade to the MacBook Pro is arguably the return of the ports.
There's an HDMI port, three USB-C ports with Thunderbolt 4, an SD card slot, and a high-impedance headphone jack. So, yes, feel free to throw all those ugly dongles in the trash. Even better, Apple also brought back MagSafe to its MacBooks for the first time since 2017. It's not an accessory ecosystem like with the iPhone 12 and iPhone 13 lineup, but the charger connects to the dedicated port magnetically like in the days of old. You can still charge via the USB-C ports.
Both MacBook Pros also pack studio-quality mics and a six-speaker sound system that consists of two tweeters and four woofers that offer 80 percent more bass. As for battery life, Apple claims the 14-inch MacBook Pro offers up to 17 hours of video playback, while the 16-inch model hits 21 hours. You can fast charge these devices too, gaining up to 50 percent battery in just 30 minutes.
The base version of the MacBook Pro (for both sizes) comes with 16 GB of RAM and 512 GB of storage. The 14-inch MacBook Pro starts at $1,999 while the 16-inch is $2,499.
Both models are currently available for preorder and go on sale on October 26. You can choose whether you want the M1 Pro or upgrade to the M1 Max if you need the extra power. The highest-tier configuration for the 16-incher brings the total to a whopping $6,099.
Apple also confirmed via its press release that its latest operating system, MacOS Monterey, will be available for download on October 25.
Photograph: Apple The new, third generation of Apple’s standard AirPods comes two years after the release of AirPods Pro and a little over a year after the release of the AirPods Max.
The old AirPods aren't our favorites. They're not very ergonomic, nor are they great for workouts because they lack the security of silicone eartips and an IP rating against water or dust. The lack of tips also meant you didn’t get a perfect seal in your ears all the time, allowing sound to leak out to the outside world at higher volumes.
This new version comes with a lower distortion dynamic driver for better bass and crisper high-end, but once again lacks eartips. Apple says the slightly redesigned earbuds, which look a bit more curvy and ergonomic, will fit much better than the previous version. We'll see for ourselves.
Thankfully, they now come with sweat and water resistance, so you can finally work out in AirPods without worrying about breaking them. The addition of spatial audio, which was previously reserved for the AirPods Max and AirPods Pro models, is also great.
Spatial audio on standard AirPods means you can watch movies in Dolby Atmos, providing a more immersive experience (a few music artists also mix in Atmos, but not many).
Photograph: Apple
Apple increased battery life from 5 hours to 6, with the ability to charge with a MagSafe charger wirelessly. It charges quickly, so five minutes plugged in will get you an hour of juice. That’s all well and good, but 6 hours remains relatively mid-tier battery life in the wireless headphone market, especially without noise-canceling on board. The latest model from Jabra, for example, comes with 8 hours and the same quick-charge feature.
The new AirPods are currently available for $179, which begs the question: Why not spend $20 more on a pair of AirPods Pro? Well, you can. But Apple also seems to have added MagSafe to the AirPods Pro's case.
If you don't care for it, the original AirPods Pro retails for less than $199.
Photograph: Apple One of the weirder announcements was Apple's new voice-only subscription plan for Apple Music. For $5 per month, subscribers can opt to abandon any visual interface for Apple’s popular streaming service and access their favorite artists and bespoke playlists exclusively via voice control with Siri.
It sounds odd at first glance, but it's a decent option for anyone who loves internet radio like Pandora, or for anyone looking for specific genres of music rather than scrolling through infinite artists and playlists on their phone. You're paying half the price of the standard subscription.
Photograph: Apple Tired of staring at your boring gray HomePod Mini? Apple has some good news for you. It now comes in three fun colors: orange, blue, and yellow. Too bad they won't do anything new. Apple didn't spell out any new features for the Siri-powered bowl, but at least it has the same $99 price.
" |
110 | 2,019 | "Apple Mac Pro (2019): Specs, Features, Release Date | WIRED" | "https://www.wired.com/story/apple-mac-pro-2019" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Early Black Friday Deals Best USB-C Accessories for iPhone 15 All the ‘Best’ T-Shirts Put to the Test What to Do If You Get Emails for the Wrong Person Get Our Deals Newsletter Gadget Lab Newsletter Scott Gilbertson Julian Chokkattu Gear Apple's Powerful and Pricey Mac Pro Arrives in December Apple's newest hardware products are the Mac Pro (left) and the Pro Display XDR with its optional stand.
Apple
Updated on November 13: Apple has announced a slightly firmer December release window, along with new improvements to the Mac Pro, like its new 8 TB storage option and the ability to edit up to six simultaneous streams of 8K video.
At its annual developers' conference in June, Apple finally delivered what the designers, photographers, video editors, and other pro-grade creatives who grew up using the company's machines have been waiting for: multiple references to the progressive rock band Rush.
Sadly, this news was soon overshadowed by the insanely powerful new Mac Pro.
Pity the unfaithful who gave up on the long-neglected previous version of the Mac Pro and bought the recently upgraded iMac instead, because Apple has finally created a Mac Pro worthy of the name. The look of the computer also harkens back to the design language used on the Mac Pro from two generations ago, which means that yes, Apple's top machine once again looks like a huge cheese grater.
WWDC keynotes usually shun specs, but Apple peppered its onstage routine with stats and figures for the announcement of the new desktop computer, touting the details of graphics cards and brightness nits in the monitor. Apple has clearly been taking notes from its professional users because these are exactly the kind of details the pro crowd cares about. The new Mac Pro is, by design, a high-end machine. It also has a high-end price tag that not many will be able to justify.
Apple The new Mac Pro starts at $5,999 for the 8-core model with 32 GB of RAM and a 256-GB solid-state drive. That can be configured up to a 28-core model with 1.5 terabytes of RAM, and while Apple initially only offered up to 4 TB for the SSD, you can now go all the way up to 8 TB. A new Pro Display XDR monitor—a new Apple product as well—to go along with your workstation will set you back another $4,999 for the base model, bringing the cost of a full setup to $11,000. And that's just the entry-level configuration.
The new Mac Pro is all about processing power and graphics. It can handle as many as four AMD Radeon Pro Vega II graphics cards, which Apple originally said would net you enough power to play three simultaneous streams of 8K video, but optimizations to Final Cut Pro will now allow up to six simultaneous 8K streams.
Apple has bucked some of its own design trends by making the Mac Pro's case easy to open. It's a user-upgradable Mac with up to eight PCI Express expansion slots—twice as many slots as on the last Mac Pro, which debuted back in 2013. This one also has attachable wheels and is designed to work as a rack-mounted system as well.
Apple is also touting a new hardware acceleration card it calls Afterburner. It's the magic behind the Mac Pro's ability to handle those six simultaneous streams of 8K ProRes RAW footage, which is what you get from RED and similar high-end cameras used for professional filmmaking. With the graphics card handling the video playback, you can use all those primary CPU cores to handle creative effects and other processing tasks.
Apple Even the most powerful video-editing workstation is nothing without a display that can best represent the machine's output, and for that Apple has delivered something that might be more impressive than the Mac Pro. The Pro Display XDR is a 32-inch Retina 6K monitor. It boasts up to 1,600 nits of brightness, sustaining 1,000 nits indefinitely—that's an impressively high output and is only achievable because the back of the monitor is heavily vented so the guts don't overheat. The rear venting uses the same cheese-grater pattern as the new Mac Pro for some visual synchronicity. A unique hinge mechanism allows for height and angle adjustments and a 90-degree portrait-mode orientation.
The contrast ratio of the Pro Display XDR is one million to one. In case you aren't an expert in monitors, that puts the new hardware in the class of what's called "reference displays." These displays are insanely expensive tools (think mid-five-digits), used primarily in high-end production shops. For those in the video production industry, the Pro Display XDR's $4,999 price tag probably sounds like a fire sale.
The new hardware may be worth every penny, but it definitely costs a lot of pennies. We'll know for sure when both machines arrive this December.
Pompeo was riding high— until the Ukraine mess exploded 13 smart STEM toys for the techie kids in your life The Icelandic facility where bitcoin is mined Inside Apple’s high-flying bid to become a streaming giant The untold story of Olympic Destroyer, the most deceptive hack in history 👁 A safer way to protect your data ; plus, check out the latest news on AI 🎧 Things not sounding right? Check out our favorite wireless headphones , soundbars , and Bluetooth speakers Senior Writer and Reviewer X Reviews Editor X Topics apple Mac Scott Gilbertson Scott Gilbertson Reece Rogers Carlton Reid Boone Ashworth Virginia Heffernan Boone Ashworth Boone Ashworth WIRED COUPONS TurboTax Service Code TurboTax coupon: Up to an extra $15 off all tax services h&r block coupon H&R Block tax software: Save 20% - no coupon needed Instacart promo code Instacart promo code: $25 Off your 1st order + free delivery Dyson promo code Extra 20% off sitewide - Dyson promo code GoPro Promo Code GoPro Promo Code: save 15% on your next order Samsung Promo Code +30% Off with this Samsung promo code Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
" |
111 | 2,023 | "Watch Blizzard's Ben Brode Answers Hearthstone Questions From Twitter | Tech Support | WIRED" | "https://www.wired.com/video/watch/blizzard-s-ben-brode-answers-hearthstone-questions-from-twitter" | "Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Blizzard's Ben Brode Answers Hearthstone Questions From Twitter About Credits Released on 03/20/2018 My name is Ben Brode, I'm the game director of Hearthstone and welcome to Hearthstone Support.
Diederik, does nobody at Blizzard have a better idea than the coin to balance going second in Hearthstone? The answer is no, nobody does.
We did have a lot of ideas about how to balance going second, the first idea was just the player who goes second gets an extra card, that's it, and we noticed that with just that one change, the first turn advantage was 60% vs 40%, that's like a 20% difference, it's significant right? And we wanted to get it to about 55%, 45%, that's about the first turn advantage in chess.
And we tried a couple things. The first was what if the player who goes second starts with more life, right? Because what often happens is the first player has an advantage, they put the second player on the back foot, and they struggle back, but over time that extra card makes a bigger difference, especially once you get 10 mana and it starts to even out, and they can come back. So let's give the second player a little bit more time. But it didn't make the gameplay feel good enough because they were always on the back foot, so we tried things like what if player two starts with a 1/1 minion on the battlefield? What if player two's first minion gets +1/+1, it's just got a free buff for their first minion? And these types of changes really started to make player two feel like they could overcome that advantage and get back into the game and start fighting it out.
But the thing that made us feel the best was the coin.
It does a couple things, one it's a very active thing, you get to choose how you use the coin.
It's pretty skill-testing, figuring out if you want to use it early in the game or for a big swing turn later on.
It creates a moment where when you play the coin, your opponent goes uh oh, it's the coin turn, what are they gonna do? And that's cool too, and it does significantly affect the first turn advantage.
We got, I think the last time we looked at it, it was like 51% to 49%, it was very close.
It's closer than a lot of other strategy games, and we think that's great, the coin's doing a great job for us.
From Spivey of the Ebon Blade, what are the chances we see cards added to the classic set? I actually think this is very high.
The classic set is meant to set the baseline fantasy of Warcraft in the Hearthstone universe.
It's meant to provide players with a baseline of power so that they can build some satisfying decks.
The classic set is one of the most powerful sets in the game right now, and it's actually kinda become a problem for us because it doesn't rotate like the other sets do, so if there's a really strong deck that uses almost entirely classic and basic cards, the standard environment around it just won't change, it'll be the same every year, and I think that's why we needed a format that does change every year, that's why we have the standard format.
So we've been rotating some cards out of classic and that's been causing some disparities in the number of cards per class.
Mage has fewer cards than any other class in standard right now because of the Hall of Fame.
So I think we have some opportunity to give Mages a couple more cards, we'd have to do it carefully because like if we add cards to Mage that are too powerful then we have the same problem we had before the Hall of Fame rotations where the decks are the same every year, Mages playing the same cards every year.
So it's okay that they're playing some cards every year, I think there's a baseline of cards that's good to have, it's very powerful in those sets, we did rotate some into the Hall of Fame, I think we'll bring some back, maybe less powerful ones, maybe ones that set the class fantasy really strongly and also give you an idea of what that class identity is but I'm not sure what those would be just yet.
From Sebastian Quiroga, what led to the decision to add hero characters like Lord Jaraxxus? It was a really creative mechanic at the time.
Actually, this is a great story because we were doing a play test with one of the engineers from the World of Warcraft team, Pat Dawson, and this is way back in the early days of Hearthstone and we were still experimenting with ways to represent the Alliance and the Horde as different factions, which we eventually scrapped, but every class had an Alliance hero and a Horde hero.
And for us, the first Warlock Horde hero was Cho'gall, and for the Alliance, we had trouble coming up with a satisfying Alliance warlock hero, so we went with the one that I could remember most, which was Wilfred Fizzlebang from the Grand Tournament.
And Pat Dawson sits down for the play test and chooses Wilfred Fizzlebang and in World of Warcraft, Wilfred Fizzlebang tries to summon a fearsome Doomguard for the players to battle, but he accidentally summons Jaraxxus, eredar lord of the Burning Legion, who destroys Wilfred Fizzlebang and takes the place of the Doomguard as the new boss that the players have to battle.
So Pat Dawson sits down and he selects Wilfred Fizzlebang and he says Wilfred Fizzlebang, is there a Jaraxxus card in this deck? And there wasn't, and I was like oh, that's a huge mistake, and so I immediately went back to my desk and I designed Lord Jaraxxus, who when you summon him, he destroys you and takes your place as the new boss and that's how we came up with that whole mechanic.
Alright, from MagicianMoo, if we have nerfs in the game, how 'bout buffs? I believe there are cards that would benefit from it.
I agree with that, there's a lot of cards that would benefit from a buff, but generally nerfs happen when something is way out of proportion with the rest of the cards and we have to bring it in line, but buffs imply that maybe we want to get everything in this thin, narrow band of power level, and that's actually just not the goal right? We have cards that are intentionally bad, maybe because the play experience they create isn't as fun or we want to create a challenge for players who really like to win with bad cards or the segment of our audience that prefers beating you with cards you think are bad, it makes their wins feel better, so we don't necessarily want every card to be in that thin range, and also we like putting in cards that in the future will make other cards better, right? A lot of the Paladin secrets were pretty weak, but that meant that we could create cards, like Mysterious Challenger that made Paladin secrets much better, and all of a sudden, with just a few cards, we could really change the meta and maybe we did too good of a job with Mysterious Challenger, but it lets us use those cards in the future to have a big impact when new cards are released.
From GreenTheAssassin, with three more sets becoming wild and possibly more wild players due to the current events and the rotation, are there plans to make wild sets purchasable with in-game gold? And actually, a lot of people don't know that you can actually buy wild sets and adventures on our websites through the Battle.net shop. Also, if you've ever bought like a piece of an old adventure while it was still in rotation, you can actually completely finish out the wings that you haven't bought yet with gold. But we are trying to keep the in-game shop very clean and very simple, we don't sell a lot of different products there, just the recent expansions.
And so we've tried to keep that on the websites and out of the game client where possible. That said, there's a lot of people asking for this, and it's something that we would consider going forward.
This is from Joshua the Unpaid Intern, you should fight for a higher wage, sir.
Will dungeon run be updated to go with the new expansion? Well, we just announced the newest expansion of Hearthstone recently, and we have some exciting single player content there, it does follow along with the theme of dungeon runs, the way that plays, but it's got a different take where we're exploring kind of a more monstrous side and doing some monster hunting, so it'll play a lot like dungeon runs, but it's got some fun twists I think players will enjoy.
Here's a question from Andrew Is Taking Candle, which, shame on you, Andrew.
Which college degree do you think would help most with becoming a Hearthstone world champion? I mean this is controversial, I don't know, I would just say forget college, just play Hearthstone, there's no class on how to become a great Hearthstone player.
Find other Hearthstone players that are quite good, ask them for advice, put together a play group and play a bunch of Hearthstone, but don't let your parents see that answer, alright? Kristen, I haven't played Hearthstone in over a year, what? Oh, but I wanna get back into it, are there any resources that can help me? Yes, certainly, I mean there are lots of people online talking about Hearthstone on the Hearthstone subreddit or our forums, there are a lot of people streaming Hearthstone all the time, that's a great way to get back into Hearthstone, you'll see a lot of the popular decks or some of the new cards that have been released without having to go in and play on your own, but also you can just come in and play Hearthstone, we won't bite.
From NegativeRainbow, is tournament mode going to have sideboard support? Sideboards in Hearthstone would be so nice.
It's not, but I will say that tournament mode is kind of a misnomer with our in-game tournaments that we're working on right now.
It's a specific implementation of competitive support for Hearthstone, specifically meant for you and your friends or for small communities or fireside gatherings to get together and have a competitive experience together.
Now we're gonna start out pretty simple, we're gonna have things like conquest and last hero standing, you'll be able to choose a number of decks built into the tournament, but we wanna really listen to players, figure out how we can make that better over time and continue to add features and support going forward.
Often our tournaments only play with a single deck one time: if you win with this deck, you've won, or if you lose with the deck, it's out of the tournament in the case of last hero standing, and so you actually don't have a lot of opportunities to go in and out of the sideboard, which is essentially a separate group of cards that you can sub in and out of your deck, so the current formats we're using wouldn't make great use out of it, but we'll talk about those kind of things going forward, I can't wait to get feedback from players about how we can make tournaments better.
From fcchan, why aren't there female kobolds? There are, hard to tell maybe.
From Mike, where did the inspire mechanic go? RIP.
Well, you know, we could bring it back some day, we don't have any plans that we've announced, but I really like when we come up with new mechanics that really explore new space, right? Inspire was a lot of fun for us to design around, we created a lot of cards in the Grand Tournament that really cared about interacting with your hero power in new ways.
We explored a lot of the space that we were interested in exploring.
Obviously there's more space there to explore, I think we might go back there some day, but we have a lot of really fun ideas and we do want new sets to be exciting and groundbreaking and (laughs) inspiring, so we'll explore some new spaces for now, but who knows, we might go back and check out inspire in the future.
Austral, Ben, when are we gonna see Pepe in Hearthstone? At least a card back, we love that little bird, less than three.
Um, y'know, we haven't talked about it. We had a Pepe meeting where we were gonna try to figure out where to get him in, and it got canceled, somebody had a conflict, so I'll get that back on the books and we'll chat about it.
From Cindercide, will there be ways to get old card backs? Yes, there will be ways to get old card backs, we've had actually a lot of conversations about what the best way to do this is.
I think there's an interesting thing here, right? Because if you just give them out, say hey everyone, here's all the old card backs, I think a lot of people would be pretty upset, right? We worked pretty hard to get those card backs, it makes us feel special that we have card backs other people don't have, so I think we have to make the friction to achieve an old card back pretty high.
With that said, if we make the friction too high, and make it feel kinda too expensive, I think that also could make people feel upset so we have to figure out the right balance there and the right way to deliver those old card backs that feels like it has enough friction but we're still working on that.
From Sam Spires, is cubelock a concern slash on your radar? Yes, it is on our radar, and I think it could be a concern, it's kind of interesting, it's a very skill-testing deck which is generally very good for Hearthstone, right? To have these decks that are in the hands of a player who hasn't played 100 games of it, pretty low win rate, in the hands of a player who's played over 100 games, a very high win rate.
That's good, that means there's a lot of interesting decisions to make and you play it differently against different decks, so I like that, and overall we have to figure out: is the power level too good? There are decks that are more powerful than it so that's not the only metric, but it's something to think about.
Is the next set gonna have an impact on its performance? That's another thing that we have to figure out as well, there's also another component which is just harder to use data to get at, which is how does this deck feel? How does it feel to play? Which I think actually pretty good, and how does it feel to lose to? Which I think we're still getting data on, some people really don't like losing to it and some people have found ways to interact with it in a more interesting way.
But we are looking at it, I don't think that we're totally unconcerned with cubelock or warlock, but I don't think we also wanna be too knee-jerky about how we respond to potential imbalances, especially with a new set on the horizon.
From Jthreau, hey bdbrode, what's your favorite golden card? I want to craft it in your honor.
Please don't say Milhouse.
I actually saw this tweet and already tweeted at him, I actually went through every card in the collection and looked for my favorite golden card and I saw Crushing Walls and I was like wow that's an incredible golden animation, the walls are literally crushing the people in the picture and so I mentioned that and he crafted it, so good on you.
From Darling in the FRANXX, with so many expansions out now and more to come soon, have you ever given any more thought to increasing deck sizes? Or perhaps making 60-card deck variants which use their own separate game mode? Actually we literally have tried this exact idea.
Every game should feel different, it's fun when you have a different experience each time.
And that could be because players are coming at you with different decks or because the way that your deck plays out is slightly different each time.
And the more cards you have in your deck, and the smaller the limit on the number of duplicates you can have in your deck, the more different each game will play out.
So we actually tried a mode where you had to have 60 cards and they literally all had to be different and that created this really interesting experience where the games were very different over time.
It also has the effect that it's harder to build a cohesive deck, so I think it's the kinda thing that might be fun for a tavern brawl or something to play with for a while, but maybe not a core mode for the game, at least right now.
From Mark Fanta, when are you going to add a spectate random match button to the menu? I love watching people play, but don't have many friends, oh, I'm sorry Mark, I think it's because I'm a slytherin, hashtag spectate hashtag Hearthstone.
Well we don't have a plan for that right now but you can go on Twitch and just select a random stream and just start watching somebody.
It's a great way to both see the game that's playing and hear their thoughts, which I think is a really fun part of the spectating experience, it's a lot harder to do through the in-game spectating.
From Nick period question mark exclamation point, will you ever add an award for fully leveling up your hero? Well right now we have golden cards that, as you level up your hero, you get more and more golden cards, you earn your class cards first and then the elusive basic neutral minions, and so you do get rewards all the way up to level 60.
I think it's one of our more satisfying progression systems because you get experience points whether you win or lose and it makes you feel better about losing if you get something, like some XP, so I'd like to extend that system in the future, we don't have any specific plans about how to do that, but I think it would be nice if that system lasted a little bit longer than it currently does.
Thomas Zdancewicz, with the reverse nerf on molten giant, does that mean that team five is looking at reversing other nerfed cards in the future when they rotate to wild, like spirit claws or the caverns below? Why reverse the giant's nerf and not other cards? Molten giant has a following, I would say, really when we nerfed a bunch of cards going into the standard rotation, one of the biggest reasons behind some of those nerfs was to make sure that the standard environment had the ability to change over time.
Molten giant was specifically nerfed for that reason because there was a bunch of archetypes of decks that were specifically enabled by molten giant, so once we came up with the idea of the Hall of Fame, we really came up with that idea because of the community response to the molten giant nerf, then we could move cards out of the standard environment and into the wild format, which is great for playing the cards that you love in that format.
We felt like molten giant was the perfect card to move into that format, it's not too powerful there hopefully, and you get to play those loved decks in that format, but there are other cards that maybe are just too powerful but aren't enabling certain deck archetypes, and I don't think those are as good of a candidate to move to wild, but we're gonna analyze those on a case-by-case basis going forward.
Thanks for watching, I hope I answered your questions.
This has been Hearthstone Support with Wired.
Starring: Ben Brode
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
112 | 2,023 | "55 Best Podcasts (2023): True Crime, Culture, Science, Fiction | WIRED" | "https://www.wired.com/story/best-podcasts" | "Simon Hill, Gear
The Best Podcasts for Everyone
Photograph: Polina Lebed/Getty Images
Podcasts are to radio as streaming services are to television, and we are lucky enough to be living through the golden age of both.
You can find a podcast about almost anything these days, but with great choice comes great mediocrity—you might need a helping hand to find the podcasts worthy of your ear. Our expertly curated list will entertain and educate you, whether you’re doing the dishes, working out, commuting, or lazing in the bath.
For more advice, check out our guides on how to listen to more podcasts and the best podcasts for kids.
If you’re feeling entrepreneurial, read our recommendations on the gear you need to start a podcast.
Updated March 2023: We added several podcasts, including Your Undivided Attention, Mobbed Up: The Fight for Las Vegas, Dead Eyes, and My Therapist Ghosted Me, plus a new health and wellness section.
In this guide: Best Tech Podcasts, Best Society Podcasts, Best Culture Podcasts, Best True-Crime Podcasts, Best Science Podcasts, Best Economics Podcasts, Best Business Podcasts, Best Celebrity Interview Podcasts, Best Sports Podcasts, Best Movie Podcasts, Best TV Podcasts, Best Fiction Podcasts, Best History Podcasts, Best Food Podcasts, Best Health and Wellness Podcasts, and Best Comedy Podcasts.
Courtesy of ABC News Sneak a peek behind the curtain, as this podcast follows the trials and tribulations of Elizabeth Holmes and Theranos , the tech startup that promised to disrupt blood testing but disintegrated in the face of whistleblowers, inaccurate results, and fraudulent claims.
John Carreyrou’s reporting broke the scandal, and his book Bad Blood also spawned another interesting podcast.
But The Dropout is a refreshingly clear recounting of the sordid tale, with season two tackling the trial.
Courtesy of Darknet Diaries Anyone with an interest in hacking and cybercrime will appreciate this investigative podcast from Jack Rhysider. Densely packed and tightly edited, the show covers topics like Xbox hacking, the Greek Vodafone wiretapping scandal, and the impact of the NotPetya malware.
Rhysider skillfully weaves informative narratives to unravel some complex issues and keeps things mostly accessible, though it may occasionally get a little too technical for some folks.
Courtesy of Center for Humane Technology Ex-Googler Tristan Harris, whom you may recognize from the Netflix documentary The Social Dilemma , talks with Aza Raskin about the dangers of living your life online. Cofounders of the Center for Humane Technology , they delve into the ethics of Big Tech, unpack the potential pitfalls, and try to imagine ways to harness technology for the good of humanity.
Courtesy of Dallas Taylor Painstakingly researched, this podcast dives deep into the world of sound to explain everything from those sounds you always hear in movie trailers to car engines, choral music, the Netflix intro, and way beyond. Learn how iconic sounds were created, why certain sounds make us feel the way they do, and how sound enriches our lives in myriad ways.
WIRED’s Gadget Lab : Want to catch up on the week’s top tech news? Listen to our very own podcast hosted by senior writer Lauren Goode and senior editor Michael Calore.
The Lazarus Heist : This captivating investigation starts with the Sony hacks , digs into the involvement of North Korean hackers, and moves on to a billion-dollar cyber theft.
Rabbit Hole : What is the internet doing to us? New York Times tech columnist Kevin Roose investigates things like the impact of algorithms on radicalization with a dreamy soundscape backdrop.
Reply All : The beautifully paced, always convivial, and sorely missed Reply All dragged us down internet rabbit holes to investigate long-forgotten songs, phone scammers, hacked Snapchat accounts, and Team Fortress 2 bots.
Click Here : With a focus on cybersecurity, this podcast unravels tales of hacking, misinformation, cyberterrorism, and more, with interviews and insight from experts in episodes that usually come in under half an hour.
Waveform : Laid-back chats about the latest gadgets and developments in the world of tech with Marques Brownlee (MKBHD) and co-host David Imel.
Courtesy of Audible Jon Ronson brings an inquisitive, empathetic, and slightly neurotic intelligence to bear on fascinating and often surprising tales. Following The Butterfly Effect ( only on Audible ), which delves into the collision of tech with the pornography industry, The Last Days of August investigates the untimely death of porn performer August Ames. All of Ronson’s other podcasts are equally excellent (we recommend Things Fell Apart and So You’ve Been Publicly Shamed ), but this is a great place to start.
Courtesy of Apple Famous German duo Siegfried and Roy were a mainstay on the Las Vegas show scene and performed about 30,000 times over five decades with an act that included white lions and tigers. When Roy was attacked live on stage, it made headlines everywhere. This podcast unravels their rise to stardom, touches on their controversial handling of wild animals, and digs into what really happened that fateful night.
Courtesy of Pushkin Industries In this eclectic mix of quirky stories, Malcolm Gladwell tackles misunderstood events and rarely discussed ideas, veering from subjects like Toyota’s car recall to underhand-throwing basketball legend Wilt Chamberlain, and even the firebombing of Tokyo at the end of World War II. Gladwell freely mixes research and opinion and enjoys challenging conventional views, but every episode serves up facts and stories you have likely never heard before.
Run Bambi Run : The riveting story of ex-Milwaukee police officer and Playboy Club bunny Laurie Bembenek, who was convicted of murdering her husband’s ex, despite conflicting evidence, and subsequently escaped prison and fought to have her conviction overturned.
Missing Richard Simmons : Ebullient fitness guru Richard Simmons used to be everywhere, and this podcast charts an investigative reporter’s attempts to find out why he disappeared.
The Moth : This podcast offers random folks the chance to tell deeply personal stories to a crowd of strangers and reinforces just how weird and wonderful humans are.
The Trojan Horse Affair : This tale unpacks the British scandal over an alleged attempt by Islamist extremists to take over a Birmingham school and radicalize its students.
Day X : A sobering look at the neo-Nazi specter in modern-day Germany, its possible infiltration of police and government, and a plan involving a military officer and a faked refugee identity.
Project Unabom : Delving into the life of Ted Kaczynski, this podcast interviews his brother and recounts the FBI investigation to try to make sense of Kaczynski’s terrifying bombing spree.
Will Be Wild : Curious about the January 6 insurrection? This podcast interviews people from both sides, examines the struggles of law enforcement and intelligence under Trump, and charts the anti-government extremism that led to this dark day for democracy.
Courtesy of Imperative Entertainment The online shoe store Zappos made Tony Hsieh a billionaire, and this podcast investigates his $350 million investment in the Downtown Project in Las Vegas. His utopian vision of a happy worker village promised to revitalize the depressed heart of Sin City. The experimental community generated much excitement, but the charismatic and eccentric Hsieh soon ran into trouble.
Courtesy of Novel Part of the way into this investigation of the Rain City Superhero Movement, a real-life group of self-proclaimed superheroes active in Seattle a few years ago, I had to stop listening and check that this wasn’t fiction. The podcast focuses on the arrogant Phoenix Jones, an ex-MMA fighter turned violent vigilante, and his fall from grace. But there is also a fascinating glimpse into the friendlier side of the movement, with some heroes handing out water to homeless folks and helping people in distress.
Courtesy of The LoudSpeakers Network Brutally honest comedians with chemistry, Kid Fury and Crissle West recap and review the latest pop culture news and offer their opinions on everything. Insightful, funny, challenging, and refreshingly different from the podcast pack, these sprawling conversations run for a couple of hours, covering recent events and frequently touching on social justice, mental health, race, and sexual identity.
Courtesy of Forever35 Like eavesdropping on conversations between relatable besties, Forever35 started as a physical self-care podcast but expanded to discuss mental health, relationships, and any other topic that appeals to LA-based writers Doree Shafrir and Kate Spencer. They go from chatting about serums and creams to seasonal affective disorder and how to deal with a new stepmother as an adult—but always in a fun, inclusive, and down-to-earth way.
Sounds Like a Cult : Fanatical fringe groups have never been so prevalent, and there’s something more than a little cultish about celebrity stans, multilevel marketing, and marathon runners—just three of the subjects this lighthearted podcast unpacks.
Armchair Expert with Dax Shepard : Now a Spotify exclusive, this often funny and always insightful podcast seeks out human truths and sometimes finds them.
Geek’s Guide to the Galaxy : Ably hosted by author David Barr Kirtley, this sci-fi fantasy extravaganza digs into fascinating topics with the help of accomplished guests like Neil Gaiman, Brent Spiner, and Steven Pinker.
The Allusionist : If you are interested in words, this witty but accessible show will delight you as it charts the evolution of slang, explains euphemisms, and generally celebrates language.
Courtesy of Las Vegas Review Journal This fascinating tale, told through interviews with old gangsters, law enforcement, politicians, and journalists, charts the symbiotic rise of organized crime and Las Vegas. The first season recounts the FBI’s attempts to take down the "Hole in the Wall Gang" and reveals the true-life inspiration for movies like Casino.
Season two tackles Jimmy Hoffa and the battle to oust the mafia from the Strip’s casinos.
Courtesy of Vox Media Soothing host Phoebe Judge unravels captivating tales with reverence in this polished production about the spectrum of crime. Criminals, victims, lawyers, police, historians, and others whose lives have been altered by crime voice their stories as Judge carefully avoids the sensational and exploitative by respectfully teasing out the heart of each subject.
Courtesy of WBEZ Give this compelling mystery five minutes and you’ll be hooked. The talented host, Brian Reed, investigates a small town in Alabama at the behest of eccentric horologist John B. McLemore, who claims the son of a wealthy family has gotten away with murder. The script, pacing, editing, music—basically everything about this production—is perfect.
Courtesy of Lava For Good Painstakingly researched, thoughtfully told, and skillfully produced, this true-crime podcast hosted by Gilbert King focuses on a 1987 Florida murder. After an incompetent police investigation and a distinctly dodgy trial, Leo Schofield was convicted of killing his wife. Despite fresh evidence and a confession from someone else, Schofield remains in prison.
Courtesy of Campside Murder may dominate this genre, but there are other fascinating stories worth telling in the world of crime, like this one, which is about a scammer posing as a Hollywood mogul. This weird, compelling, investigative podcast unwinds a satisfyingly twisty tale that’s mercifully free of blood and violence. The third season, Wild Boys , tells a completely new story, and the fifth tackles hypnotist Dr. Dante.
Who Killed Daphne : Investigative journalist Daphne Caruana Galizia was murdered by car bomb in Malta, and this podcast delves into her work exposing the unscrupulous elite to identify her killers.
The Clearing : The families of serial killers often seek obscurity (understandably), but that means we never hear their stories. That’s something this podcast about April Balascio, daughter of American serial killer Edward Wayne Edwards, rectifies.
The Trials of Frank Carson : Police and prosecutors go after the defense attorney who has been beating them in court for years, sparking accusations of conspiracy and one of the longest trials in US history.
Sweet Bobby : This British catfishing tale charts successful radio presenter Kirat’s relationship with handsome cardiologist Bobby, and things get impossibly weird.
Dr. Death : A gripping podcast that focuses on incompetent or psychopathic (maybe both) ex-surgeon Christopher Duntsch and exposes terrifying institutional failures.
Crimetown : Taking a forensic approach to organized crime in American cities, this slick podcast comes from the supremely talented makers of The Jinx.
Hunting Warhead : A journalist, a hacker, and some detectives go after a chilling child abuse ring led by a criminal known as Warhead in this tactfully told and thorough podcast.
Love Janessa : Catfishing scams are big business, but why do so many use photos of Janessa Brazil? This podcast tracks her down to find out.
The Evaporated: Gone With the Gods : Journalist Jake Adelstein dives deep into Japanese culture, pursuing his missing accountant and exploring the mysterious disappearances of thousands of people in Japan every year.
Courtesy of Aubrey Gordon & Michael Hobbes The worlds of wellness and weight loss are awash with questionable products and advice, so a podcast to debunk fads and junk science with reasoned argument and research is welcome. It’s more fun than it sounds, thanks to the entertaining hosts, and there’s even a fascinating episode on “snake oil” that recounts the history of health scams.
Courtesy of NPR An absorbing deep dive into human behavior with the help of psychologists, sociologists, and other experts, Hidden Brain is densely packed with informative nuggets. The host, NPR’s accomplished science correspondent Shankar Vedantam, renders complex ideas accessible and offers insight into the inner workings of our minds.
Courtesy of BBC This whimsical show, hosted by physicist Brian Cox and comedian Robin Ince, poses questions like “Does time exist?”—which are then debated by a diverse panel of three guests, usually a mix of experts and entertainers. Definitive answers are in short supply, but it’s always articulate, enthusiastic, and thought-provoking.
Science Rules! : Bill Nye, the science guy, teams up with science writer Corey Powell to grill experts on all sorts of interesting science-related topics.
Stuff You Should Know : Prizing knowledge for its own sake and provoking healthy curiosity, this podcast is comical, charming, and full of interesting conversational nuggets.
Courtesy of NPR This Planet Money spin-off delivers digestible, fast-paced, well-told stories about business and the economy, tackling topics that range from TikTok marketing to opioid nasal sprays and ticket scalpers. Each enlightening episode comes in under 10 minutes and serves as a quick primer that will leave you feeling well informed.
Courtesy of Freakonomics Radio Network Promising to delve into the “hidden side of everything,” this long-running, data-driven show is hosted by Stephen J. Dubner, coauthor of the Freakonomics books, and it regularly features economist Steven Levitt. It’s a clever mix of economics and pop culture that flows easily and balances entertainment with education, presenting both sides of debates while consulting relevant guests.
Courtesy of Macro Musings If you long to understand the economy better, this topical show, hosted by David Beckworth of the Mercatus Center, interrogates a diverse line-up of economists, professionals, and academics to bring you invaluable insights. It takes a serious look at macroeconomics and monetary policy, but the guests do a solid job of unpacking complex topics.
Planet Money : This top-notch podcast has entertaining, digestible, and relatable stories about the economy, unraveling everything from health care to income taxes.
EconTalk : This no-frills show sees economist Russ Roberts engage in sprawling conversations with writers and academics on a range of economics topics.
Courtesy of Wondery This NPR podcast hosted by Guy Raz explores the stories behind some of the biggest companies in the world from the perspective of the innovators and entrepreneurs who built them. Expect cautionary tales, nuggets of wisdom, and business lessons galore in probing and insightful interviews that reveal a lot about their subjects and what drove them.
Courtesy of Steven Bartlett Serial entrepreneur Steven Bartlett built a successful business from nothing and is now an investor on Dragons’ Den (the UK’s Shark Tank ). He talks frankly about his own experiences and interviews various CEOs to find out why they started their businesses and how they guided them to success. Sprawling discussions range from personal life challenges and mental health to business strategies and advice.
Courtesy of TED/Audio Collective Expertly hosted by organizational psychologist Adam Grant, this podcast offers practical advice on tackling various issues you are sure to encounter in the average job. The show features interesting psychological perspectives on everything, from how to rethink a poor decision to crafting a great pitch to dealing with burnout. The podcast also boasts insightful interviews with business leaders.
The Pitch : Fans of Shark Tank will enjoy this podcast, which features entrepreneurs pitching investors to secure real money for their startups.
Ask Martin Lewis : Personal finance guru Martin Lewis has been helping folks in the UK save money for years and provides straightforward financial advice here.
BizChix : This podcast from business coach Natalie Eckdahl is aimed squarely at female entrepreneurs and is packed with no-nonsense expert advice.
Teamistry : With a focus on teams and what they can achieve, the latest season of this podcast tells the fascinating story of the supersonic passenger jet Concorde.
Courtesy of Adam Buxton Consummate conversationalist Adam Buxton is always witty and well prepared, and he has interviewed many interesting people over the course of his long-running show, from Charlie Brooker to Jeff Goldblum. Ostensibly rambling, Buxton skillfully pulls fascinating insights from his interview subjects, bouncing between their personal lives, work, and popular culture with seeming ease.
Courtesy of Wondery Likable actor Justin Long and his brother Christian host this enthusiastic and sprawling interview show, where they chat with guests like Zack Snyder, Kristen Bell, and Billy Crudup. The siblings get sidetracked by nostalgic reminiscences and occasional bickering, which sort of makes the show, but they are always generous and kind to their guests.
Courtesy of Wondery Charming and goofy, this conversational podcast stars Jason Bateman, Will Arnett, and Sean Hayes, and they always have a surprise celebrity guest, like Ryan Reynolds or Reese Witherspoon. It is warm, gentle, and often laugh-out-loud funny, but don’t expect challenging questions or bared souls.
WTF With Marc Maron : Self-deprecating, sardonic, and supremely skilled, Marc Maron interviews some of the world’s most famous people, from Barack Obama to Paul McCartney.
Grounded With Louis Theroux : A soothingly gentle facade belies Louis Theroux’s ability to draw fascinating insights from his subjects with tact and humor.
Where There’s a Will, There’s a Wake : Kathy Burke laughs in the face of death, asking guests like Stewart Lee and Dawn French how they’d like to die, what sort of funeral they want, and who they plan to haunt.
Courtesy of Wondery Epic rivalries and long-anticipated showdowns are a massive part of the enduring appeal of sports, and this slick production homes in on them. Rivalries like Federer vs. Nadal in tennis and Tyson vs. Holyfield in boxing are unpacked over a few episodes apiece by host Dan Rubenstein, who digs into their backgrounds to understand why some face-offs get so highly charged.
Courtesy of The Ringer This hugely popular sports podcast features fast-paced roundtable conversations with athletes and celebrities that usually focus on the NFL or NBA. Unfiltered opinions, witty remarks, and encyclopedic sports knowledge collide, but this is enthusiastic and accessible enough for casual sports fans to enjoy.
Courtesy of The Athletic Primarily focused on baseball, this long-running podcast sometimes covers other sports and often meanders into comical conversations. Guests offer amusing anecdotes, but the chemistry between hosts Joe Posnanski and Michael Schur, who can debate endlessly about any old nonsense, is what makes this show so special.
Undr the Cosh : Open and honest banter from ex-professional soccer (football) players, as they talk to current pros and recount hilarious on- and off-pitch anecdotes.
Around the NFL : This funny, fast-paced look at the National Football League runs through all the latest football news, blending anecdotes and analysis.
32 Thoughts : A slickly produced, insightful dive into all the latest hockey news and controversy from knowledgeable hosts who bounce off each other.
Courtesy of Earwolf We have all asked this question (how did this get made?) of a movie at some point, but hosts Paul Scheer, June Diane Raphael, and Jason Mantzoukas invite guest creatives to engage in heated and hilarious chats about some of the worst films ever. Movies that are so bad they are entertaining, from Face/Off to Junior to The Room, are dissected and thoroughly ridiculed.
Courtesy of BBC Respected film critic Mark Kermode has an infectious love of movies and an incredible depth of knowledge about the world of film, and Simon Mayo is a veteran radio presenter. Together they discuss the latest movies, interview top-tier directors and actors, and invite views from their listeners. While the podcast ended earlier this year, the duo have a new show called Kermode & Mayo’s Take.
Courtesy of You Must Remember This Diving into Hollywood myths to investigate and uncover the truth about infamous secrets, scandals, and legends from Tinseltown is a compelling premise, and talented creator and host Karina Longworth makes the most of it. Among the best shows are the “Dead Blondes” series, which includes Marilyn Monroe; the run on Manson; and the “Frances Farmer” episode.
The Director’s Cut : Listen to directors like Guillermo del Toro, Steven Spielberg, and James Cameron being interviewed about their latest movies by their peers in roughly half-hour episodes.
The Rewatchables : Bill Simmons and a rotating cast of cohosts discuss and analyze beloved movies and dig up interesting nuggets of trivia.
Lights Camera Barstool : Reviews, interviews, rankings, and accessible chats about the movies with pop culture debates thrown in.
Black Men Can’t Jump [in Hollywood] : This comedic movie review podcast highlights films featuring actors of color and analyzes the movies in depth, with an eye on race and diversity.
Courtesy of Headgum Join comedian and actor Connor Ratliff on his mission to discover why he got fired from Band of Brothers.
His amusing and honest account of how his big break went bad, reportedly because Tom Hanks thought he had “dead eyes,” is often very funny. An easy listen, peppered with celebrity guests like Seth Rogen, Elijah Wood, and Zach Braff, Dead Eyes affords listeners an insight into the world of auditions, acting triumphs, and humiliation.
Courtesy of HBO Whether you’re new to this captivating show or a long-time fan, the official podcast affords you a peek behind the curtain as it dissects episodes and explores character motivations. Roger Bennett interviews the main players from the show and then Kara Swisher steps in for the third season to interview the makers and various guests, from Mark Cuban to Anthony Scaramucci, to examine its impact and where it mirrors world events.
Courtesy of Wondery Recounting the tragic tale of the exploitative 2004 reality TV show There’s Something About Miriam , this podcast reveals just how cruel reality TV can get. Six young men set up house in an Ibizan villa to compete for the affections of Miriam and a £10,000 ($12,100) cash prize, but the show producers failed to tell them Miriam was trans. It’s a story that ended badly for everyone.
Courtesy of Steve Schirripa Hosted by actors from the show, Michael Imperioli (Christopher Moltisanti) and Steve Schirripa (Bobby Baccalieri), this podcast is essential listening for fans. It runs through every episode with big-name guests, most of whom worked on or appeared on the show. It’s candid about the entertainment industry and absolutely packed to the brim with behind-the-scenes anecdotes and insider revelations.
Shrink the Box : Actor Ben Bailey Smith talks with psychotherapist Sasha Bates as they put some of the best TV characters of all time (like Walter White and Omar Little) on the couch for analysis.
Obsessed With… : This BBC podcast is hosted by celebrity superfans of various TV shows, including Killing Eve , Peaky Blinders , and Line of Duty.
Fake Doctors, Real Friends : Rewatching Scrubs with Zach Braff and Donald Faison is a joyous experience that’s every bit as entertaining, poignant, and silly as the TV show.
Welcome to Our Show : A warming dose of nostalgia and comfort for New Girl fans as Zooey Deschanel, Hannah Simone, and Lamorne Morris rewatch the show together.
Courtesy of The Paragon Collective Horror fans will enjoy reliving the last gruesome moments of various corpses that have landed at the mysterious Roth-Lobdow Institute in this deliciously creepy and occasionally gross chiller. Wonderful narration from Lee Pace; acting from the likes of Denis O’Hare, Missi Pyle, and RuPaul; and clever sound design make for a memorably thrilling ride that you just know is going to end badly.
Courtesy of Hello from the Magic Tavern Thoroughly absurd, this fantasy improv-comedy show is the brainchild of Chicago comedian Arnie Niekamp, who falls through a portal at a Burger King and ends up in the magical world of Foon. The role-playing game and fantasy references come thick and fast, guests play bizarre characters of their own creation, and loyal listeners are rewarded with long-running gags and rich lore.
Courtesy of Battle Bird Productions Short and sweet episodes of this sci-fi comedy-drama fit neatly into gaps in your day and whisk you away to a nightmare corporate dystopia in a galaxy fraught with evil artificial intelligence and monstrous aliens. Struggling repair technician Kilner gets stuck with a rich murder suspect, Samantha Trapp, after accidentally smuggling her across the galaxy in this polished show with a distinct 1980s feel.
DUST : This podcast started as an anthology of audio sci-fi stories from the likes of Philip K. Dick and Ray Bradbury but has changed things up with each new season.
The Bright Sessions : The therapy sessions of mysterious psychologist Dr. Bright, bookended by voice notes, form intriguing short episodes, as all of her patients seem to have special abilities.
Welcome to Night Vale : This pioneering creepy show is presented as a community radio broadcast from a desert town beset by paranormal and supernatural happenings.
Courtesy of Vox Media Utopian ideals have led to the development of some fascinating communities over the years, and season one of Nice Try! delves into their history, the hope that drove them, and why these communities ultimately failed. Season two moves on to lifestyle technology, from doorbells to vacuums, all designed to help us realize a personal utopia in the ideal home.
Courtesy of Revolutions The modern world was shaped by some of the ideas that drove revolutions, and this deeply researched series runs through the English Civil War and American, French, Haitian, and Russian revolutions; Simon Bolivar’s liberation of South America; and more. The writing is concise, the narration is engaging, and host Mike Duncan does a fantastic job contextualizing revolutionary events and characters.
Courtesy of Radiotopia A dreamy, emotional quality elevates these tales of seemingly random moments from the past, expertly told by the eloquent Nate DiMeo and backed by wonderful sound design. These distilled stories serve as historical snapshots of rarely discussed events, and it’s hard to think of another podcast as artful and poignant as this one.
Courtesy of Grim Mild Assured in their divine right to rule over everyone, royal families were often incredibly dysfunctional. Author Dana Schwartz examines tyrannical regimes, murderous rampages, power struggles, and dynasty deaths. The madness of monarchs from various nations is concisely dissected in tightly scripted half-hour episodes that will leave you questioning the idea that there’s anything noble about their bloodlines.
Something True : Enjoy utterly bizarre true stories, as every episode of this podcast explores a seemingly forgotten historical footnote.
Lore : Spooky and witty, this classic podcast plumbs history to uncover horrifying folklore, mythology, and pseudoscience.
Medieval Death Trip : An enthusiastic and well-researched look at medieval times, this podcast offers a witty analysis of the primary texts left behind.
Hardcore History : Relatable and endlessly fascinating, Dan Carlin brings history to life with his own riveting narratives on notable events and periods, peppered with facts and hypothetical questions.
Courtesy of Ramble Whatever side of the titular, age-old debate you stand on (I’m with the British Sandwich Association ), this fast-paced, often funny show will suck you in as it poses tough food-related questions and then debates them. Chefs Josh Scherer and Nicole Enayati decide whether American cheese is really cheese, if Popeye’s and In-N-Out are overrated, and what the best pasta shape is.
Courtesy of Gastropod If your love of food extends to an interest in the history and science of everything from the humble potato to a soothing cup of tea to ever-polarizing licorice, then this podcast is for you. Knowledgeable cohosts Cynthia Graber and Nicola Twilley talk to experts and serve up a feast of delicious bite-size facts that surprise and delight.
Courtesy of The Ringer Celebrity chef Dave Chang, whom you may know from his Netflix show, Ugly Delicious , talks mostly about food, guilty pleasures, and the creative process with other chefs and restaurateurs. There is plenty here to satisfy foodies, but some of the funniest moments come when the show covers other random topics, like the perfect email sign-off or wearing shoes indoors.
Out To Lunch With Jay Rayner : This podcast seats you at a top restaurant to eavesdrop on consummate food critic Jay Rayner with a celebrity guest at the next table.
The Sporkful : You can learn a lot about people and culture through food, and this podcast proves it by serving up delectable bite-size insights.
Courtesy of Lionrock Whether you are struggling with addiction, childhood trauma, eating disorders, or something else, or you know someone who is, this accessible and inspirational podcast can help you examine why. Host Ashley Loeb Blassingame speaks from experience and offers practical advice to help you onto a healthier path. This podcast is honest, insightful, and emotional but ultimately heartwarming and uplifting.
Courtesy of LYT Yoga Hosted by yoga leader and physical therapist Lara Heimann, this podcast is a mix of Q&A sessions, interviews with experts, and motivational advice. It focuses on understanding your body and mind, but you will also find practical advice for chronic pain sufferers and different kinds of injuries, explanations on why and how yoga is good for you, and firsthand accounts of the positive impact yoga has on many lives.
Courtesy of Great Love Media Each episode sees psychiatrist Mark Goulston interview a notable person about the wake-up call moment that changed their path forever. He encourages them to interrogate what sparked their drive, made them want to be a better person, and led to their success. Some guests are better than others, but the podcast is closing in on 500 episodes, so there are plenty to choose from.
The Big Silence : Host Karena Dawn has conversations about mental health with an eclectic mix of therapists, psychologists, and ostensibly successful folks.
Spiraling With Katie Dalebout and Serena Wolf : Candid chats about anxiety with advice on how to cope. The relatable hosts are open and honest about the anxious feelings that modern life can evoke.
Huberman Lab : Host Andrew Huberman, a professor of neurobiology and ophthalmology at Stanford School of Medicine, interviews various experts to offer advice on optimizing your health and fitness.
Courtesy of Global Player Irreverent Irish chat with comedian Joanne McNally and TV presenter Vogue Williams as they put the world to rights. It feels like eavesdropping on brutally honest best pals as they discuss relationships, work woes, health issues, awkward social situations, and sometimes recent news. The down-to-earth pair liberally dole out a mix of sound and questionable advice that is frequently laugh-out-loud funny.
Courtesy of Shiny Ranga Comedians and friends Tom Davis (the Wolf) and Romesh Ranganathan (the Owl) chat aimlessly and expertly poke fun at each other for around an hour. It’s often nostalgic, sometimes offers decent advice for listeners, and is always warmhearted and laugh-out-loud funny.
Courtesy of Team Coco Perennially single stand-up comedian Nicole Byer is every bit as charming and funny here as in Netflix's Nailed It baking show, but this podcast delves into some adult subjects. Byer is disarmingly open about her insecurities and struggles and seamlessly stirs in vulgar humor. She also hosts hilarious conversations with guest comedians.
Courtesy of Athletico Mince Ostensibly a soccer (football) podcast, this surreal show is brought to life by lovable British comedy legend Bob Mortimer, with support from sidekick Andy Dawson. Tall tales about real footballers, complete with strange voices and fictional personalities, are mixed with songs, silly inside jokes, and rambling conversations. You don’t really need to know anything about soccer to enjoy it.
Locked Together : Only on Audible, this show features lockdown chats between comedian pals like Simon Pegg and Nick Frost or Rob Delaney and Sharon Horgan.
My Neighbors Are Dead : The wonderful premise of this hit-and-miss improvised show is interviews with lesser-known characters from horror movies, like the caterer from Damien’s party in The Omen and the neighbors from Poltergeist.
" |
113 | 2,020 | "In a World Gone Mad, Paper Planners Offer Order and Delight | WIRED" | "https://www.wired.com/story/in-a-world-gone-mad-paper-planners-offer-order-and-delight" | "Quinci LeGardye, Backchannel
In a World Gone Mad, Paper Planners Offer Order and Delight
Photograph: Jessica Pettway
Back in April, deep into a YouTube budget-planning rabbit hole—an attempt to minimize my pandemic agitation by exerting what control I had over my own corner of the world—I came across a woman named Alaina. She was walking viewers through the planner she had created, showing the debt-reduction tracker and the financial goals page, talking about how to create a daily, quarterly, and yearly money routine. I was fascinated; her method of tracking every aspect of her finances was so different from my approach, which involved avoiding it altogether until I received a credit card bill or a low balance alert.
I watched all of her budgeting videos. Eventually, I clicked on one titled “How I Use My Happy Planner.” Her hands were moving quickly over the pages, turning them, pointing out different sections, gesturing along with her explanations. She had clean, tiny handwriting and used cute stickers: a cloud in a dark blue bubble for a chance of rain that day, a little wallet next to “budget review,” a little laptop on her schedule, across from a to-do list. Then, onward to the business planner.
Wait, I thought, you have more than one planner? She had eight: catch-all, business, budgeting, home, personal, faith, notes, and reading.
Hypnotized, I watched her flip through all the planners, trimmed down into sections of a binder and hole-punched to fit on shiny metal rings. I was astounded by the discipline required, the amount of control she had over her time and task list. She could turn to a page in a book on her desk and know exactly what to do with the next hour of her life. I wondered if I'd just stumbled on the most productive person in the world.
Alaina Fingal (@theorganizedmoney) organizes her compartmentalized mind in Frankenplanners.
Photograph: Akasha Rabut "Once I became a mom, a wife, an entrepreneur, that's when I realized I needed something more customizable, where I could kind of plan each area of my life." Photograph: Akasha Rabut When I was a kid, I would observe aunties, teachers, and movie heroines—what they did every day, how they moved through the world—seeking a glimpse of what adult life was like. I saw glamour, accomplishment. Alaina (@theorganizedmoney) reminded me of that vision. I liked getting to know her by seeing what made up her day as an accountant, entrepreneur, and mom.
Was the sense of control I saw in her videos learnable? I searched a phrase that kept coming up in the titles of Alaina’s videos: “plan with me.” A vocabulary revealed itself. A world of planner obsessives opened up. Plan With Me’s, I soon discovered, were videos of people demonstrating the art of decorating and accessorizing their bound paper planners. The pages came in many layouts: horizontal, with seven paragraphs of plain notebook lines; vertical, with three blank boxes descending down the page for each day; dashboard, with lists for what to do and what to buy each week; hourly, with timelines from 5 am to 10 pm. Now to decorate. You might go spare, and just lay down a few icon stickers for work meetings and the kids’ activities. Or you could go ornate, with dozens of colorful boxes and flower stickers. Your approach might depend on the space you need for planning and the format of the page. Are you working on a typical weekly spread or a travel plan with packing checklists? A memory page with personal pictures or a blank week you might use to practice hand-lettering?
The decorative planner babes were the women (most planners I encountered were women, but there are also planner men) who decked out their planners with so many stickers that the lines on the pages disappeared, hidden by colorful boxes that could handle their short lists and reminders and also coordinated with the flowers, leaves, animals, fruit, or colorful shapes that matched the week’s theme. Functional planners favored layouts that featured more ink than stickers; they would time-block their days in hourly layouts, scheduling when they would work, eat meals, exercise, watch Netflix, meet friends. They would still add a few stickers, though, because “making it look pretty makes [you] want to look at it.” I knew early on I wasn’t a decorative planner. I had tried bullet journaling in a dotted notebook at the beginning of the year but stopped in March when I lost my ongoing freelance work at the beginning of the pandemic and all of my plans dried up. I was bingeing Plan With Me videos during a period when my depression was really bad and I was taking a break from freelance work, so my sense of hourly or daily time had dimmed. Getting my brain in order required more than a blank dotted page; I needed functional layouts more than free space.
I watched videos explaining planning techniques, walking viewers through how planners make special pages, break down big projects into tinier tasks, plan actionable goals. You can also watch reviews of new releases, which come out nearly constantly—either small creators releasing sticker designs every month or the big companies coming out with collaborations with brands and seasonal releases, at which point the brands and their marketing squads put out flip-throughs, and enthusiasts rush to snap them up from the companies’ websites or Michael’s or Joann or Hobby Lobby before they sell out. Influencers do sponsored posts and offer affiliate codes and giveaways between more personal videos.
YouTube is great for explanations on specific layouts and techniques, but Instagram is the place to share pictures of weekly spreads and to converse with other planners. There are so many on Instagram, hundreds of pictures of spreads posted by power planners, walls of color and pen making up their feeds. And on Facebook, casual planners ask for advice on planning and life, small planner company founders and sticker shop owners ask for feedback, and everyone shows off their planner carts and pen collections and meetups and this is the cutest sticker ever and you go girl! and there’s also a convention and podcasts for all this—
“It’s a lot,” planner influencer Desiree Perez tells me after explaining how involved she is in the planning community, what I call Planner World. She decorates multiple spreads a week, runs popular Instagram and YouTube channels, promotes for Happy Planner, one of the largest planner brands, works a full time job as an administrative assistant, and presumably sleeps at some point. “It is a lot, but I really, really enjoy it so much.”
Photograph: Amanda Lopez "I didn't understand all the stickers and scrapbooking stuff, so I left that alone. Then I went on YouTube, and then I saw ‘Oh, planning is a whole different world." Photograph: Amanda Lopez As I absorbed the details of planning culture, I kept expecting to suddenly want to turn away. In the past, when I’ve discovered subcultures that had whole languages and practices (think BTS Army or Big Brother fandom) I would ultimately write them off as not for me, that it would take too much to learn. Plus, I was (and remain) pretty skeptical about all those pretty spreads and inspirational “plan a better you” quotes. Surely it was a veneer? I wondered how much real talk could really exist in a world of constant self-improvement, especially one in which the primary outreach platform is everything-is-perfect Instagram. Mostly I was afraid that I would see a planner post about racism or depression or fatphobia, and someone would respond that they were being “too negative.” But I kept digging anyway.
And the planner world is huge. Over the past decade, planning has grown into a giant online community, with 5.5 million mentions for #planneraddict and 4 million mentions for #plannercommunity. Paper planners, which are commonly thought of as schoolyard tools sold in Target aisles, make up a multimillion-dollar industry. The most recent figure—and it’s safe to assume the numbers have only grown—has the planner industry showing $342.7 million in sales in 2016. Planning grows from a productivity tool to a hobby to a lifestyle for thousands of women every year. These women gain a sense of control in a chaotic world by planning as much of their lives as they can. Even in a year where no plan is safe from the pandemic, and no industry is safe from racial uprising, life doesn’t stop, and planners gonna plan.
The history of planning is the history of journaling, storytelling, pen, paper, scheduled events, anticipation. Someone, long ago, made a note of something that was going to, or was supposed to, happen. The first recorded American use of a planner as a tool dates back to Colonial America, when Founding Fathers including George Washington would weave blank pages into almanacs , those annual collections of calendars, weather forecasts, tools for finance calculations, political essays, and planting dates, calculated based on the movement of the planets. Washington would keep various diaries dedicated to specific journeys, along with daily logs detailing his difficulties planting tobacco and notes on his slaves and employed artisans. These components were incorporated into daily planners, first in 1773 by Robert Aitken’s self-proclaimed first American daily planner, then in basic ones carried by Union soldiers, then in the Wanamaker Diary , sold by the eponymous department store from 1900 to around 1971. The Wanamaker included historical facts, poems, recipes, seating charts for popular theaters, and dates for social events across the country, as well as advertisements for the store and the brands it sold.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Paper planners and the advertising around them skewed determinately male during the ’60s and ’70s. Imagine something similar to a Moleskine or a Leuchtturm: black leather, coil- or spine-bound, blank except for lines delineating months and weeks and maybe boxes for check marks. Or something for students, with an attached ruler and printed multiplication tables. Brands like FranklinCovey sold planners as efficiency, needed for the business executive to get ahead. Women needed efficiency too, of course, but managing the kids, cleaning, and household finances wasn’t the industry’s priority.
When the ’80s shoulder-padded career woman emerged to take over the corporate world, she carried a Filofax binder, with a colorful cover and pockets and an address book section and no-nonsense inserts—smartphones for the corded-phone era. In the ’00s, women started creating their own planners, and planning became conflated with the paper crafts and scrapbooking industry, making planning an aesthetic exercise as much as an intellectual one. Women added photos and stickers to the blank areas of their planner pages to infuse them with more of their personal life and memories, and eventually entire planners were dedicated to home and personal life. Creativity and art is now entwined with paper planning so much that bullet journaling has even been coopted by artists. Search #bulletjournal on Instagram, and you see more posts of hand-drawn layouts and arabesques than of the sparse to-do system invented by Ryder Carroll.
Now that books, calendars, and work itself (thanks Zoom) are almost fully digitized, the rise of paper planners seems inevitable. Planner fans use iCal and Google calendars, too, of course, for the purpose of sharing schedules, but digital alerts and to-dos that disappear after they’re completed make the sense of accomplishment just as evanescent. Writing a task down on paper helps it stick in the brain, and a long list of crossed-out to-dos shows the day’s accomplishment.
The women (and men) in the decorative planner industry grew up on Lisa Frank notebooks and Lilly Pulitzer. The books they use are high-quality, thick paper that take pens and paint with no bleed-through. When I buy books, I rub my hands over the covers, and I delight in turning the pages; I once bought three boxes of a pen I liked; in high school, I created a Choose Your Own Adventure book as a final project: This habit seems made for me. I might find coral and neon-green accents slightly ridiculous, but I can be swayed by a pretty flower sticker. Plus, even though a “Just Be Happy” sticker feels like a challenge—like every stressor can easily be ignored and succumbing to it is my fault—a saying like “Just Trust the Process” can sneak up on me and lift my mood.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg It took awhile to get into the habit of writing everything in my brain down, but now I start my day by opening the planner, seeing what I already have to do, and where I can fit in tasks that I enjoy. It gives me a sense of care and luxury. And it feels like meditation: My head empties and clears of stress. And the act of placing a sticker, placing the bookmark, feeling the lack of scratch as the pen glides over smooth paper, also feels like care. The care that went into designing the planner, caring for my schedule, caring about my mental state. When I write down my to-dos and schedule out my day, it’s a way of being nicer to myself, not having to rush tasks I forgot or stress that I’m not getting enough done.
Most planners got into paper planning during a time of personal upheaval, or when their schedule became overly hectic. They tell stories of cross-country moves, demanding jobs, military schedules, and loss of a spouse. They all turned to planning either to help their mental health or to get a handle on their lives.
“The only reason I got into planning was, at the time, I was working two jobs, and for one of my jobs the hours were all over the place,” says Perez, who is a member of the brand Happy Planner’s 2020–2021 Squad. Through the Squad, she receives products from Happy Planner to promote, and she gets mentoring from veteran Squad members. “I was really missing creativity. I was working, working, everything was technology—you're on computers all day long. I just missed having some kind of creative element in my life, and planning gave me that creativity back, but it also gave me the function that I really needed to get things, like, in order, so it was win-win, since I'm doing something functional, but it's also fun.” The Plan With Me videos Perez posts consist of her decorating her planners (usually vertical layouts) with stickers based on a weekly theme. She uses an elaborate decorating process, testing sticker placement on see-through wax paper before placing them onto the layout, cutting off ends that go past the border with an X-acto knife. By the end of each video the page is fully decorated, and she stops before filling in the page with her plans, disregarding the functionality of the planner and leaving the viewer awash in aspiration. Maybe their planner can look that beautiful too. Watching her decorate and hearing her thought process is very soothing, the way makeup tutorials are soothing: I reach a place of calm by focusing on watching her decorate, and it activates my brain’s creative center. It makes me want to write, or draw, or cook.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Perez, who has 30,000 followers on Instagram and 11,000 on YouTube, has a day job, but the planning she features on her channel is mostly about personal errands. Before entering Planner World, I had only ever equated planning with work, whether the stuff I had to accomplish within a 9-to-5 or the projects I have to juggle as a freelancer. The only list I had ever made for anything personal was a grocery list or a list of books I wanted to buy. Once I bought a printed packing list, but I never filled it out. Planner babes, though, get to the point where they’re scheduling what time during the week to have free time. The first time I saw a time block for laundry, something in my brain imploded. That’s one thing about merging into a community: If there’s something you can’t do or don’t understand, and everyone else around you can, you feel like there’s something wrong with you. It took a few months of looking at videos of beautiful house-cleaning-routine spreads and feeling overwhelmed at the idea of doing an hour’s worth of domestic work every day before I accepted that all I have the capacity to plan right now is work and my budget, and that’s OK.
I also can’t deny that consumerism is a huge part of planning. Even though planning at its purest is about functionality and creativity, it’s also powered by a retail machine, where companies, both small and large, need to keep churning out product and influencers need to keep generating content. And it’s not like the multiple planners and sticker books are free. A lot of Planner World is about being a better you—accomplishing your goals. But it’s also about having the best stickers or the biggest pen collection … or buying a new type of planner that will finally bring you peace. There’s an inherent FOMO in seeing multiple influencers (or Squad members) decorate with the same stickers, and it took a few months for me to suppress the urge to buy a new botanical sticker book for $19.99, or another planner that I didn’t need.
The collective hope of the community is that, at the bottom of the planner rabbit hole, planner babes will emerge as more effective, relaxed, and fulfilled versions of themselves. Many planners say that writing down their schedules and tasks frees up their brains and takes away their anxiety. That’s what I was hoping for when I first got sucked into the idea of planning my life to the last second. I eventually realized that I value flexibility, and having a hard stop time for completing a task makes me freak out too much. I don’t want to plan my entire life, but having my work and finances figured out has given me one less thing to worry about in the middle of an unending pandemic, when the world’s on fire.
For a community that prides itself on fostering friendships between people from “all walks of life,” Planner World still has its blind spots. There is no official demographic data for planners, but the most popular influencers and the heads of the biggest companies tend to be white. Of course there are planners of color and BIPOC-led companies with huge followings. But during the social upheaval in the wake of George Floyd’s death, more and more Black planners started speaking out about inequality within the planner community. Their feelings of being passed over felt harder to ignore.
Megan Payne and Myra Powell have been planner friends since Powell introduced Payne to planning in early 2019 when they were working together at an insurance company. Even after the two left the company, Payne, who now works as a teacher, would be the person Powell called to discuss happenings in Planner World (new releases, company drama, planner layouts), and vice versa. The two started their discussion podcast, Planners and Wine, soon after the racial upheaval taking place around the world reached the planner community. They wanted to speak out.
According to Payne, there’s a lot of diplomacy in the planner community, and people are often afraid to step on each others’ toes or criticize companies they’d like to work with in the future.
“So we finally got to the point right around the time that George Floyd was murdered that we were like, forget this. If nobody else is going to say it, we need to just hop on and say it,” Payne says.
For Megan Payne (@megsgotaplan), her me-time is spent in her planning room.
Photograph: Temi Thomas "My husband knows: Don't come in here, don't bring me my daughter. Like, I'm in here. It definitely helps me keep my sanity and just gives me time." As with many communities, you can look to Instagram to see the effect of the Black Lives Matter movement on Planner World. Many planner companies and influencers participated in #BlackOutTuesday, and planner influencers started posting Black Lives Matter planner spreads in the days after. Companies primarily promoted Black planners, and influencers bought from Black-owned planner companies and sticker shops, branching out from their loyalties to specific brands.
This is a sharp change from what Powell has felt as a Black planner trying to grow her YouTube channel to reach more viewers. Before May of this year, she would notice that white planners tended to have larger followings and more opportunities for sponsorship. Though she would remind herself that her channel would grow at its own pace, it was difficult for her to see influencers who started around the same time surpass her.
"The word that I can describe the planning community for women of color was just invisible.
We kind of combined together, and we support each other, but there would be plenty of times where you would see a white planner babe get started around the same time you did and skyrocket. Tens of thousands of followers within the same time frame doing the same stuff. Most of the time you don't even see our faces, we’re just showing our planners, but the Black planner babe will still be at a thousand followers,” Powell says.
Myra Powell (@myraplansit) turned to planners soon after the birth of her son.
Photograph: Da'Shaunae Marisa "I was like, 'Oh crap, I don't have my life together. Let me at least try to write down some stuff.' For most of us, our life has gotten a little bit away from us, and we just need something to put it all together." There’s no way to determine why a casual planner follows certain influencers. Contributing factors could include using their preferred planner, having the same style, enjoying their stickers. But even in a video focused on a bound stack of paper on a plain desk, bits of personhood creep in: jokes show personality; voices are higher or lower, with vocal fry or an accent. And so much shows in the hands: wrinkles, marks, rings, polish.
When I started watching planner videos, I upped my hand care. I’d never focused on my own hands moving before; my eyes follow the words I type or write. But after watching Plan With Me videos, I can no longer assume that hands are invisible instruments for words. Or art, building, cooking. They’re always there, and to onlookers, skin color may always influence their opinion of the planner.
“I think it has to do a lot with both conscious and unconscious bias," Payne says. "Sometimes [companies are] intentionally choosing white women over women of color and Black women, because they don't see us in the community, even though there's no reason not to see us. They have just chosen not to see us. And sometimes it's that unconscious bias.” Multiple criticisms and scandals shook the planner community after Floyd’s death, all of them touching on that question of conscious or unconscious bias. Black planners who’d had enough of being ignored scrutinized multiple companies’ marketing squads. Those who had similar or the same number of BIPOC planners year after year were accused of recruiting based on quotas. Happy Planner was criticized for an Instagram post that planners believed used language similar to All Lives Matter; it later apologized and deleted the post. American Crafts, a paper craft company that makes stickers and washi tape, was put on blast for not having any BIPOC presenters on its past 10 annual squads. After the criticism, American Crafts added three Black planners to its 2020–21 design team.
Then Erin Condren Designs nearly got canceled: Its namesake founder helped organize a graduation march for her children’s high school class that went against social distancing guidelines and occurred amid the Black Lives Matter protests. In an apology posted on Instagram, Condren insisted the march was “in no way registered, associated with, or guised as a [Black Lives Matter] protest.” However, it was a clear workaround to an LA County ban on gatherings that had effectively canceled the school’s traditional graduation ceremony.
According to Payne, who’s been planning since early 2019, the dilemmas that come up regarding Black representation are a more widespread problem than most would admit.
“It's a lot of these other companies who are not run by women of color or Black women. They didn't even realize until it was too late. And now they're having to harshly come to the conclusions that, if they just would've had somebody like us in the room who could have explained it to them or who they could have listened to, they could have avoided all of this. So that's their own consequences,” Payne says.
Now that planner companies are reckoning with calls for more diversity, support of Black-owned companies and Black influencers has gone up. You could argue they are reaping the benefits of the demand for more Black representation, but that’s not a planner-specific concern. From what I’ve gathered through casual conversation and Twitter, a lot of Black people are questioning whether it’s really good to have increased opportunity when it was prompted by police brutality and white guilt. Is the new follower or sponsorship genuine support? Or another type of objectification? As I write this, I imagine so many voices, including my own, saying, “I mean, I’ll take it, but … ” As time goes on, as protests become the new norm, activists fear that Black Lives Matter will be diminished to just a slogan; they are concerned that antiracist movements within various communities and industries will survive only if the right voices stay loud and keep fighting. For planners, that’s voices like Planners and Wine or the companies that actively amplify and partner with Black creators.
Then there’s the question of whether this really is the new normal, whether once the economy opens and the protests get less press, everything will go back to the way it was before. Will the move to virtual existence and this summer’s social upheaval become a forgotten anecdote, a themed notebook that sits in the back of the drawer? Payne doesn’t think so. “We wouldn't let it go back to how it used to be with no representation,” she says. “We just absolutely would not let it go back to that. And I don't think anybody that we know, even our allies, would just let it go back to business as usual.” I’m cautiously optimistic about the future of the planner community. The planners I’ve spoken with are open to the “real talk” I’ve been seeking: They acknowledge the issues in Planner World and refuse to sweep concerns about representation, consumerism, and toxic positivity under the rug. Planner founders and CEOs seem sincere about the importance of diversity. But the choice is still mine: If this moment becomes ephemeral, if “just be happy” becomes the standard response to racism, I will stop supporting whoever takes that stance. There are so many planner companies that I can choose those that are owned by women of color who actively support these issues. No matter what happens in the community, my day’s going to start with opening a paper planner.
" |
114 | 2,023 | "Master and Dynamic MH40 Review: Beautiful Austerity | WIRED" | "https://www.wired.com/review/master-and-dynamic-mh40" | "
Ryan Waniata Gear Review: Master & Dynamic MH40 Photograph: Master & Dynamic $399 at Amazon $399 at Master & Dynamic If you buy something using links in our stories, we may earn a commission. This helps support our journalism.
Learn more.
Rating: 8/10 Style, build quality, and sound. These are the core essentials in the new MH40, Master and Dynamic’s latest update of a classic that goes back to the New York City-based audio brand’s early days as a market disrupter in 2014.
It’s not a lavish formula for a pair of $400 wireless headphones in 2023, especially compared to models loaded with modern features like Sony’s WH-1000XM5 (9/10, WIRED Recommends).
But these aren’t your average pair. With a dead-gorgeous design built from elements like anodized aluminum, lambskin, and titanium, the MH40 look and feel different from the monolithic plastic shells of most rivals. Their obstinate minimalism in the face of the current trend is almost freeing, especially since what you get in place of loads of features is brilliant sound and construction designed to last.
The MH40 skip a lot of extras, but their biggest transgression is a lack of noise canceling or transparency mode, which are all but prerequisites at this price. You can get both features in M&D’s step-up pair, the MW75 (8/10, WIRED Recommends), for $200 more. The price and lack of ANC mean that the MH40 wouldn’t be my first choice for most folks, but their sterling sound and head-turning style could be hard to pass up for style-conscious listeners who don’t need noise canceling, or who are simply willing to pay for premium headphones that stand out from the crowd.
Photograph: Master & Dynamic Pulling the MH40 from the box, you can’t help but smile. They’re just beautiful cans, especially in our review unit’s burnt-brown leather (they’re also available in four other colors, including solid black). The latticed exterior screens reflect the light like ripples on a sunlit lake. The metal chassis feels at once elegant and robust, thanks to solid base materials matched by a speckled aluminum finish.
Polished industrial posts at the sides provide smooth action and numbered settings for the ear cups as you slide them in place. Even the lambskin-cloaked pads feel classy, set on magnets for easy removal and replacement. The pads also offer one of the MH40’s best attributes: good noise isolation that kills a lot of sound around you when you add a bit of music. I can’t hear my keystrokes as I type this review, for instance. That’s a great thing for a pair that lack noise canceling.
The headphones are fairly comfortable, thanks to plenty of memory foam along the ear cups, and with their quality leather skins, they should become softer and more tailored to your head as they wear in. They aren’t as comfy as Sony’s older WH-1000XM4 or new XM5, at least not yet, but few headphones are. My biggest complaint is the dearth of padding on top, which can wear on your head after a few hours. But the MH40’s light weight (around 280 grams) keeps this mostly in check.
Pulling off the ear pads reveals the gleaming new 40-mm titanium drivers beneath, aimed at improving treble and bass response over the 2019 model—part of the justification for the new model’s $100 price rise. Unfortunately, you won’t find any sensors for auto-pause, which is one of a substantial list of premium features you’ll get in rivals from Sony, Bose, Sennheiser, and plenty of others.
Instead, you’ll be relegated to (gasp) manually pausing audio from the three-button control center on the right ear cup’s exterior circlet. The rubberized keys aren’t as fancy as touch controls, nor as satisfying as the metallic beads in the discontinued MW65 (which I still own), but they are intuitive. Volume, playback, and voice-assistant commands are all easily navigable during wear, with few fumbles. Set just below the command center, the power/pairing key lets you pair up to two source devices at once for easy swapping between the two, which is a nice feature for those of us who use headphones on laptops and cell phones alike.
There are a few basic control options in the M&D Connect app. You can monitor the MH40’s respectable 30-hour battery life, turn on side tone for calls, adjust the timing for the auto-off feature, and fiddle with a few EQ presets—and that’s about it.
Absent are options like speak-to-talk or other pausing features, and of course deeper settings for features the MH40 lack, like adaptive noise canceling and transparency mode. A lot of these are conscious design choices, but I think M&D should at least include a multiband EQ. Luckily, the sound is good enough to make that last beef a minor one.
Photograph: Master & Dynamic Maybe it’s the visual trickery of good branding, but I’ve always thought about M&D’s sound signature as a sonic representation of its headphone designs. There’s usually luxurious detail, smooth and stylized dimensionality in the soundstage, and a bright metallic edge that brings some extra vibrance and excitement to instrumental attacks.
The MH40 mostly hold to my mental impression of the brand’s sound. They offer fantastic clarity and balance, with a pulse of exuberance at the top and bottom of the frequency curve, and a sculpted cut to mid-range instruments that lets them dig especially well into crunchy guitars, taut percussion, and splashy brass.
To Master & Dynamic’s credit, there does seem to be a notable step-up in sound quality in the latest model, with plenty of rich detail to discover in your favorite tunes. It’s the kind of performance that allows you to lose yourself in the textures of instruments and the reflections of effects like reverb and echo, while also discovering subtle nuances you’ve missed in previous listens.
Cymbals ring with shimmering resonance, letting you bask in the different colors as the drummer rattles the sticks through tunes like Snarky Puppy’s “Jefe.” Bass is rich and full, without becoming overbearing. If and when it does get there, you can dampen it with the Bass Cut preset, or deepen it with Bass Boost (though each are a bit heavy-handed).
When compared directly to Sony’s WH-1000XM4, the MH40 match or outdo them across genres, seeming to add more readily discoverable details in songs like Beck’s “ Paper Tiger ,” where the aggressive strings almost jump through the ear pads in visceral expansion.
Air’s “ Alpha Beta Gaga ” sends rippling metallic effects through the ether with striking accuracy. Even The Weeknd’s Starboy shines with a richly defined bass line and notable echoes in the lead vocal that bob through the stereo image in ways I’ve missed in multiple previous listens.
At times, I found the bright sound could use a little relaxation, and I wished I could dampen some frequencies by a few decibels. The stereo image also feels a bit narrower than on some of my favorite headphones in their class, like the Sony WH-1000XM5 and Sennheiser Momentum 4. But it wasn’t something that jumped out at me, and the excellent instrumental separation lets you deeply explore the soundstage.
Headphone traditionalists will appreciate the ability to plug in directly with the MH40’s suite of accessories, which includes a USB-C cable and USB-A adapter for up to 24-bit/96-kHz resolution from supported sources, raising their performance even further. There’s also a short but usable 3.5-mm cable that connects to the MH40’s lone USB-C port and works without power, meaning these headphones should keep working long after the battery gives out.
This is a package aimed at those who care more about the look, sound, and reliability of their headphones over the long term than the advantages of modern tech or a deep toolkit of features. If that sounds like you, the MH40 are very enticing headphones with style for miles. I am sure you’d get plenty of great years of listening out of a pair.
" |
115 | 2,023 | "8 Best Photo Printing Services (2023): Tips, Print Quality, and More | WIRED" | "https://www.wired.com/story/best-photo-printing-services" | "
Scott Gilbertson Gear The Best Photo Printing Services Photograph: BremecR/Getty Images Suburban America used to contain roughly one 1-hour photo lab for every 500 people. Little kiosks were sprinkled across strip mall parking lots like pepper on a bad steak. Then came the digital camera, and suddenly there was no film to develop. Those kiosks abruptly disappeared, taking our photo printing options with them. Developing film isn't commonplace today, but the desire to have a photograph as an object has never faded. In place of the 1-hour photo booths, there are endless online printing services, most of which produce far better results than the kiosks ever did. Unfortunately, some of them are truly awful at printing your images.
To make sure you don't end up with prints of your kids with orange skin against green skies (yes, that happened in one test), we assembled a collection of photos designed to test color, tonal range, blacks, whites, and more, and fired them off to dozens of services. Here are the best places to print your photos. All prices are for standard 4 x 6 prints. For more immediate results, be sure to check out our Best Instant Cameras and Printers guide.
Updated March 2023: We've added our thoughts on printing books at Mixbook, business cards at Moo, and photo storage and printing options from SmugMug.
If you buy something using links in our stories, we may earn a commission. This helps support our journalism.
Learn more.
Photograph: Mpix Buy at Mpix When my kids were born I wanted to make sure they, like me, inherited a shoebox full of faded family photographs. I bought a film camera but decided the film was too expensive, so I sold that and bought a DSLR instead. I started using Mpix to print everything. The results have never disappointed me. Mpix is an offshoot of Miller's Professional Imaging (a pro-only printing service), and the pedigree shows in the print quality.
Mpix prints on Kodak Endura paper and offers a variety of paper options. I tested the E-surface, which renders rich, deep blacks and very true-to-life colors. It holds up well over time; images I printed in 2013 look exactly like they did when I got them.
The website is simple to use. You can import images from the most popular social networks and photo-backup services like Dropbox, Facebook, Google Drive, and OneDrive. (Unfortunately, Instagram isn't on the list.) Once your images are in your Mpix account, you can order prints in virtually any size, including options tailored to images for your phone (4 x 5.3 inches, for example). There are also options to print on canvas, wood prints, and more.
It's not the cheapest service, but Mpix frequently has sales. Unless you're printing something as a gift and need it now, I suggest waiting until prices dip.
Starting at 36 cents per print Photograph: Printique Buy at Printique The highest-quality prints in my testing came from Adorama's Printique service, formerly called Adoramapix. Choosing between Printique and Mpix was one of the toughest calls I've had to make in this job. In the end, I went with Mpix because you get free shipping, and frequent sales make it cheaper, but if printing quality is your only concern, Printique wins by a hair. A part of the reason is its options: You can choose a range of papers, and they're listed by their actual names like Kodak Endura or Fujifilm Matte. I also like the option to print the date and file name on the back of each image.
Printique can quickly end up on the pricier end, but the extra money gets you much better prints. I went for the Kodak Endura Luster paper (which is also what Mpix uses). The colors are true to life, with rich blacks and good details in both shadows and highlights.
Another place Printique shines is in the photo-upload process. You can import images directly from your computer or from an array of other places, including Dropbox, Facebook, Flickr, Google Photos, Instagram, and Lightroom.
Starting at 32 cents per print Photograph: Snapfish Buy at Snapfish If you don't have a lot of money to spend, but you still want good-looking prints, Snapfish delivers. Snapfish doesn't offer the same quality of prints as our top picks, but it's less than a third of the price, and the results are not bad.
You can upload images from your computer or phone, or import them directly from social media (Facebook, Flickr, Google Photos, or Instagram). The web interface is easy to use, though as with most of the cheaper services, you'll be constantly bombarded with upsells for books, mugs, and more. Some of these turn out to be fun (see below), but it's still annoying.
I was surprised by the quality of prints from Snapfish considering the price. They're better than what I got from several other services (not reviewed here) that charged more than double. Snapfish also has excellent prices on some more left-field printing options, like coffee mugs.
I recently made my kids some mugs using photos of drawings they'd made. The results were fun, though I definitely wouldn't expect these prints to hold up to the dishwasher. Still, for $2 (with a coupon during the holidays), it's hard to go too wrong. The full price on these is technically $13, but Snapfish frequently offers coupons that bring it down to about $4, sometimes lower. Don't pay more than $6.
Starting at 9 cents per print Photograph: Shutterfly Buy at Shutterfly I've used Shutterfly to create everything from calendars to books and have been happy with the results, but the company's prints are not the best.
The tonal range is good, shadows don't disappear into pure black, and at the white end of the spectrum, clouds retain plenty of detail. But the prints have a flat look to them and the paper is flimsy compared to our top picks. I also found the constant upselling on the website tiring. Every time you upload photos, even if you've already said you want to make prints, Shutterfly interrupts the purchase process to say, “We've turned your images into a book,” forcing you to dismiss this unwanted dialog just to get to the thing you actually want to buy.
Given the subpar purchasing experience and lack of outstanding results, I recommend Shutterfly for prints only if you're on a tight budget, since it is cheaper than Mpix or Printique. Where Shutterfly excels is in those books it’s always trying to sell you. I've been happy with the results of both books and calendars.
Starting at $20 per photo book Mixbook: This came highly recommended by some friends and it does have nice book designs and templates, and an easy-to-use online book-making tool. Unfortunately, I did not love the results. Colors were often washed out and blacks were not the deep rich blacks I was expecting. I did not like it as much as books I've printed with Shutterfly or Mpix, though it is cheaper than both, so if you're on a budget this isn't a bad choice.
Photograph: Nations Photo Lab Buy at Nations Photo Lab Nations Photo Lab prints on quality paper, and the packaging is the best of the bunch. It's hard to imagine anything ever happening to your images in transit the way the company secures them, although shipping times are among the slowest.
While the prints are high quality, I found that many times—especially with landscapes—colors are washed out. Highlights, especially bright white clouds against a blue sky, lack detail compared to the same images from Printique. The results for portraits are much better. Nations' color correction does an excellent job with skin tones, and it produces the best portrait-style prints of the services I tested.
What I really dislike about Nations is the website. It's slow and sometimes difficult to navigate (and I never could get it to give me a receipt). If you want to upload a lot of photos to Nations, the far better option is to use the third-party app ROES (Remote Order Entry System).
It's a Java-based desktop app that, once set up, greatly improves the experience.
Starting at 32 cents per print Courtesy of Google Buy at Google Photos If you're all-in on Google Photos, the simplest way to get artifacts in your hands is the built-in printing service. Google offers a few printing options for users of Google Photos. We don't recommend the prints; the quality is about the same as what you'd get at Walgreens or CVS, which we also don't recommend. However, a Google printing service that's available in the US, Canada, and Europe—and something we can highly recommend—is a photo book.
I used Google Photos to print a photo book made up of my favorite shots from a 2019 trip to Mexico City. First, I curated a selection of a few dozen photos inside the Google Photos app, collecting them into an album and organizing them into the rough running order I wanted to see them in the book. When I opened that photo album in Google Photos, a little shopping bag icon appeared at the top of the page. Clicking on it started the book-building process. I chose the cheapest option, a 7-inch-square softcover book, which is $15 for the first 20 pages and 50 cents for each page beyond that. (Larger hardcover books start at $30 for 20 pages, with additional pages costing $1 each.) The interface for designing a book is simple, but you can organize your photos in some creative ways. I set up most of my pages with the photos floating in the middle, leaving a thick white border around them. For some, I chose a full-bleed option, which makes the photo run all the way to the edges of the page. (In those cases, I got to select how the photo would be cropped, which was nice.) I shuffled the order of the photos with Google's drag-and-drop interface and found that juxtaposing the two layout styles (matted and full bleed) on facing pages made the results look almost professional. The resulting book arrived within a week. It feels nice, with thick, satin-finish covers, a square-bound spine, and very minimal Google branding on the back cover. Google Photos does compress images when you upload them to the cloud, keeping them under 16 megapixels. But on my small, 7-inch softcover book, I can't see any pixelation or digital artifacts in the pictures. About half my shots were from my Pixel phone with a 12-megapixel sensor, the other half from a nice Ricoh point-and-shoot with a 24-megapixel sensor. The photos in my book look nice and sharp, and I can't tell they are compressed. — Michael Calore
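For a sense of what that 16-megapixel cap does to a file: it limits total pixel count, not print size, so both dimensions shrink by the same factor. Here is a quick Kotlin sketch of the arithmetic, assuming the cap is exactly 16 million pixels, which is an approximation on my part:

import kotlin.math.sqrt

// Compute the dimensions a photo would be scaled to under a 16-megapixel cap,
// preserving aspect ratio. Photos already under the cap pass through untouched.
fun cappedDimensions(width: Int, height: Int, maxPixels: Long = 16_000_000L): Pair<Int, Int> {
    val pixels = width.toLong() * height
    if (pixels <= maxPixels) return width to height
    val scale = sqrt(maxPixels.toDouble() / pixels)
    return (width * scale).toInt() to (height * scale).toInt()
}

fun main() {
    println(cappedDimensions(6000, 4000)) // 24-MP Ricoh frame: roughly (4898, 3265)
    println(cappedDimensions(4032, 3024)) // 12-MP Pixel shot: (4032, 3024), unchanged
}

By that math, the 24-megapixel Ricoh frames give up roughly a fifth of their linear resolution, while the 12-megapixel Pixel shots pass through untouched, which squares with neither looking degraded in a 7-inch book.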
Google Photos does compress images when you upload them to the cloud, keeping them under 16 megapixels. But on my small, 7-inch softcover book, I can't see any pixelation or digital artifacts in the pictures. About half my shots were from my Pixel phone with a 12-megapixel sensor, the other half from a nice Ricoh point-and-shoot with a 24-megapixel sensor. The photos in my book look nice and sharp, and I can't tell they are compressed. — Michael Calore Gear The Best Home Depot Black Friday Deals Matt Jancer Gear Apple’s Pledge to Support RCS Messaging Could Finally Kill SMS Boone Ashworth Gear Wish List: 47 Awesome Gifts for All the Enthusiasts, Connoisseurs, and Fanatics in Your Circle WIRED Staff Gear The PlayStation Portal Turns Your PS5 Into a Handheld, Sorta Eric Ravenscraft Starting at $15 for a photo book Courtesy of SmugMug Buy at SmugMug If you're looking for something that goes beyond making prints of your snapshots, SmugMug is our top pick. It's popular with professional photographers for its online showcases, RAW file storage, and print sales options. You upload your images, put them in a gallery, and can showcase that to clients, and even sell prints directly from those galleries.
SmugMug handles all the details of getting your online images to a print lab. It automatically sends your image to a printer whenever a customer orders a print, which is pretty handy if you're selling your work. Prints in the US are handled through EZPrint labs; in Europe, it works with Loxley. SmugMug is not free though. Access to the basic plan, which gets you unlimited online storage, private galleries, and tight integration with Adobe Lightroom, among other things, will set you back $13 per month.
Starting at $13 per month Photograph: Moo Buy at Moo I covered SXSW for WIRED way back in 2006 and one of the strange things I remember is that everyone I met was handing out these clever little half-size business cards that came from a company named Moo. Moo still offers those cards ( $21 for 100 of them ), but it has also grown into a full-service print shop that can do anything from business cards to custom postcards to water bottles. Moo would not be my top pick for photographs, as that's not really its specialty, but for artwork, invitations, postcards, flyers, and just about everything else, I've been impressed.
I printed some postcards with some custom designs (including photographs and some of my kid's artwork) and was impressed with the accuracy of the colors. All the paper I've tried has been high quality and the color matching is probably the best of all the services I've tried. You can upload your own designs for most things or use Moo's templates, which offer some customization options. That would be my only real criticism—Moo's online tools don't offer quite as many customization options as I'd like. Fortunately, it's easy to do your own work in free software like GIMP and then upload your files as PDFs or JPGs.
Starting at $21 for business cards and $23 for postcards Amazon’s Photo Printing: This service produced the worst images, not just out of this particular test, but the worst prints I've ever seen. Full stop. The best I can say about it is that it's fast. I had my prints in less than 24 hours. The problem is, of the 25 prints I ordered, eight of them had printing errors. Convinced that a 30 percent failure rate must be some kind of fluke, I fired off another round of 25 (different) images, and this time seven of them were misprinted. That's a kind of progress, I suppose, but not one I would recommend. I didn't bother trying again, and I suggest you avoid Amazon's photo printing service.
Walmart/CVS/Walgreens: Technically, 1-hour photo kiosks didn't die. They wormed their way inside pharmacy chains. There's nothing wrong with these services. They're convenient, and this is still the fastest way to get your images printed as uploaded jobs generally process within a few hours. But the results vary tremendously from one store to the next. Just like the 1-hour services of old, the quality of prints you get depends on what shape the machine is in and how skilled the technician working that day happens to be. You might be able to get good prints at your local store, and it might be worth checking out if you're not happy with other options, but for most people, this isn't going to get the best results.
We used a mix of images that represented a good cross-section of the kinds of photos most of us have. That includes green forests, blue seascapes, browns and grays in city shots, portraits, macro images, close-ups, images with strong bokeh , stacked images with long depth of field, and more.
We didn't limit testing to good images either. We tested plenty of blurry images, photos that were overexposed and washed out, and ones where details might be lost to shadow. In other words, images like most of us have on our phones and in our cameras. Some images came from RAW files we edited in desktop software, others were sent straight from our phones, and we also pulled from social media posts.
The latter, while convenient, will get you the worst images. Social media photos are compressed, and, with the exception of Flickr, most do not allow you to access your original uploads, so you're printing from seriously degraded versions. The far better choice is to upload images straight from your phone. It's less convenient, but the extra work is worth it.
Yes, a RAW file taken by a full-frame camera with a good lens is going to print better than anything you get from your phone. But as long as your phone has a decent camera , you're not really going to notice a huge difference in a 4 x 6 print. Even at 5 x 7, it’ll be fine. If you want to go bigger, one trick to "hide" the flaws of a low-quality image is to print on canvas. It's not cheap, but the texture will hide many image artifacts and allow otherwise low-res photos to look good on your wall.
It's a good idea to use some kind of image editing app to add contrast and sharpen your images before you upload them.
Adobe Lightroom isn't cheap, but it's popular with professional photographers. Other good options include Google Photos (under adjustments, look for the "Pop" slider, which is especially helpful), Snapseed, Photoshop Express, and my favorite desktop image editor, Darktable.
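If you'd rather script those two adjustments than make them by hand, they amount to a few lines on the JVM. Here is a rough Kotlin sketch using the standard java.awt imaging classes; it is not a workflow any of these labs recommends, and the contrast factor and sharpening kernel are starting points you would tune by eye:

import java.awt.image.BufferedImage
import java.awt.image.ConvolveOp
import java.awt.image.Kernel
import java.awt.image.RescaleOp
import java.io.File
import javax.imageio.ImageIO

// Boost contrast, apply a mild sharpening convolution, and save a copy ready
// for upload. Values here are illustrative, not print-lab recommendations.
fun prepForPrinting(input: File, output: File) {
    val src: BufferedImage = ImageIO.read(input)

    // Contrast: scale pixel values by 1.15x; the negative offset keeps the
    // midpoint (around 127) roughly where it was.
    val contrast = RescaleOp(1.15f, -19f, null).filter(src, null)

    // Sharpen: a standard 3x3 kernel that accentuates edges.
    val kernel = Kernel(3, 3, floatArrayOf(
        0f, -1f, 0f,
        -1f, 5f, -1f,
        0f, -1f, 0f
    ))
    val sharpened = ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null).filter(contrast, null)

    ImageIO.write(sharpened, "jpg", output)
}

A gentle unsharp-mask pass in a proper editor will still beat a bare 3x3 kernel, but for batch-prepping a folder of phone shots before upload, this gets you most of the way.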
" |
116 | 2,021 | "Google Taps Samsung to Co-Develop Wear OS, Fitbit to Debut New Smartwatches | WIRED" | "https://www.wired.com/story/google-wear-os-io-samsung-fitbit-partnership" | "
Julian Chokkattu Gear Google Is Finally Taking Smartwatches Seriously Photograph: Justin Sullivan/Getty Images If you can't beat ’em, join ’em. That's Google's strategy for the ailing smartwatch platform it launched back in 2014. To better compete with the likes of Apple, Google has a new three-pronged plan to invigorate Wear OS, and it involves partnerships with two brands it previously competed against in the wearable category: Samsung and Fitbit.
First, Wear OS will launch later this year as a unified platform co-developed with Samsung, merging select features from the Tizen operating system the Korean company uses for its Galaxy smartwatches.
That means future Samsung watches will run Wear instead of Tizen. Second, Google will add more of its own apps to the Wear platform and will update its existing apps to give them more robust capabilities. Finally, Wear's health and fitness features have been rebuilt from the ground up with input from Samsung and Fitbit, and Fitbit smartwatches running Wear are on the way. (Google completed its acquisition of Fitbit earlier this year, so now the Wear team and the Fitbit teams are under the same roof.) The announcement came at Google IO, the company's annual developer conference. The event is virtual for the first time ever, joining a spate of other tech conferences that have avoided in-person gatherings for more than a year.
Tiles are a carousel you can scroll through, positioned next to the watch face. Now any third-party developer can make one.
Photograph: Google Originally named Android Wear when it first debuted in 2014, the Wear OS smartwatch platform has been made available by Google for any watch manufacturer to use, similar to the arrangement Google has with smartphone manufacturers who want to use its Android operating system. But unlike Android, where a phonemaker can “skin” an Android phone's software to match its brand, companies using Wear didn't have much control over the look and feel of the OS. There wasn't room to tailor the experience to any specific brand. It's likely why Samsung opted to go its own way and develop its own wearable-device software after testing the waters with just one Android Wear smartwatch.
Over the years, Google was slow to introduce new features to Wear OS, and the number of manufacturer partners for the OS dwindled. Samsung, on the other hand, saw success with its Galaxy smartwatches despite the gamble of loading them up with its in-house Tizen OS. However, Tizen has its own weaknesses too—namely, the lack of available apps in Samsung's bespoke app store for Tizen. Wear OS smartwatches may not have been popular, but at least the platform has some desirable apps.
That brings us to the new unified software platform that Google developed with Samsung. It's technically a new version of Wear OS, although Google hasn't yet decided on its name. The company has dropped the “OS” and has started calling it “Wear,” though a spokesperson says we'll see more finality on the name later this year. More importantly, this new version offers manufacturers more flexibility with hardware and software, meaning a Wear smartwatch's interface can be made to feel more consistent with a brand's smartphone and provide a more homogenous experience. A Google-made reference user interface is also available for manufacturers that don't want to make any tweaks.
“We think this will be great for the overall ecosystem,” says Björn Kilburn, Google's lead project manager on Wear. “It'll be good for all devicemakers; it'll be good for developers that we bring these two things together.” Google also leveraged Samsung's help in making Wear more battery efficient. Most Wear OS smartwatches have historically lasted only a day or two before needing a recharge. Kilburn couldn't offer specifics about battery life gains, which largely depend on the individual smartwatch, but he says many of the workloads that need to run all the time on the watch, like heart-rate sensing, have now been moved to more power-efficient environments in the hardware. You'll soon be able to track your heart rate all day “without killing the battery,” he says.
The platform runs more smoothly too. Kilburn cited up to 30 percent faster performance, with animations and transitions that look more fluid. These tests are based on watches using the “latest chipsets,” but Google did not confirm if that meant it's been testing the Qualcomm Snapdragon Wear 4100 processors that launched last year. Samsung uses its own Exynos processors in its Galaxy smartwatches, so if Wear is optimized for these chips, it might mean more diversity in the smartwatch chipset market (and a potential new revenue stream for Samsung), as the bulk of Wear OS smartwatches are powered by Qualcomm.
Some features from Tizen OS are going to be directly ported over to Wear as well, such as Samsung's watch face designer tool. It will be a part of Wear later this year, and many existing watch faces will make the jump with it. That also means Samsung will no longer make Tizen-powered smartwatches. Its future Galaxy smartwatches will run Wear, and the company says it will continue offering familiar experiences, such as the popular rotating bezel that lets a user navigate the software interface without touching the screen.
The biggest benefit for Samsung? App support. With Tizen out of the picture, Kilburn says, developers don't need to build apps for as many platforms and can largely focus on Wear and Apple's WatchOS, just like how developers currently build mobile apps for Android and iOS. Thanks to some changes Google has made in its development tools, software-makers should now find it easier and faster to build Wear apps. Kilburn says there will be a lot “more investment and innovation coming to consumers in the form of apps.” Jisun Park, corporate vice president and head of mobile platform and solutions at Samsung Research America, echoed that sentiment in an email. “Further collaboration with Google also allows us to expand our ecosystem for developers and partners so that they can take the wearable experience to even greater heights,” Park wrote. As for existing Galaxy smartwatches, Samsung says it's committed to providing Galaxy Store support and three years of software updates from the product's launch. Your existing health data can be exported to newer watches, but more details will come at a later date.
The greater degree of customization in Wear now afforded to manufacturers does come with a price: the responsibility of delivering software updates. It means new features from Google for the platform may not be available to all Wear smartwatches immediately, similar to how new Android features may or may not roll out to older Android phones. Fragmentation has been a major problem with Android as manufacturers have neglected to issue updates, or have been slow to get around to them.
A selection of new designs of some apps in the new version of Wear. This shows offline listening in the YouTube Music app, the Recents menu to see recently opened apps, turn-by-turn navigation in Google Maps, and a Tile from the Calm meditation app.
Photograph: Google The Apple Watch comes with a host of Apple-made apps, each of which offers similar functionality to its respective iPhone app. That's not the case with Google's Wear platform, and it's a shortcoming the company is trying to fix. Kilburn says the team is rebuilding Google's apps in Wear with updated guidelines from Material Design, the company's software design language that ensures apps look and behave in a way that's consistent. This strategy will bring new features to the next generation of Wear smartwatches that more closely match the features found in the respective apps on an Android phone.
Google Maps, for example, will offer turn-by-turn navigation in a new user interface that will also work even if your phone is not with you. YouTube Music will finally debut on the platform and will include offline listening (Google says Spotify will also add offline listening). Google Pay support on Wear is expanding to 26 new countries, bringing the total to 37, and will feature a redesign. Many of these changes will arrive later this year alongside the launch of the new Wear, but some updates—like a redesigned Google Assistant—will come in early 2022.
Wear also has some new software navigation tricks. A double press of a button will now instantly switch to a previously open app, and a new Recents menu lets you quickly hop back into recently used apps. Wear's Tiles, which are widgets that sit in a carousel next to the watch face and offer the type of information that can be soaked up with just a glance (like the weather or your next calendar event), are also getting an upgrade: Any third-party developer can now make one.
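For a sense of what building one involves: a Tile is a service that hands the system a layout to render in the carousel next to the watch face. Below is a minimal Kotlin sketch against the androidx.wear.tiles library. The API was still pre-release when this piece ran, so treat the exact builder names as approximate, and the step count shown is a hypothetical placeholder.

import androidx.wear.tiles.LayoutElementBuilders
import androidx.wear.tiles.RequestBuilders
import androidx.wear.tiles.ResourceBuilders
import androidx.wear.tiles.TileBuilders
import androidx.wear.tiles.TileService
import androidx.wear.tiles.TimelineBuilders
import com.google.common.util.concurrent.Futures
import com.google.common.util.concurrent.ListenableFuture

// A bare-bones Tile: one line of text rendered in the carousel. A real Tile
// would build a richer layout and refresh on a schedule.
class StepsTileService : TileService() {

    override fun onTileRequest(
        requestParams: RequestBuilders.TileRequest
    ): ListenableFuture<TileBuilders.Tile> {
        val text = LayoutElementBuilders.Text.Builder()
            .setText("5,342 steps") // hypothetical placeholder data
            .build()
        val entry = TimelineBuilders.TimelineEntry.Builder()
            .setLayout(LayoutElementBuilders.Layout.Builder().setRoot(text).build())
            .build()
        val tile = TileBuilders.Tile.Builder()
            .setResourcesVersion("1")
            .setTimeline(TimelineBuilders.Timeline.Builder().addTimelineEntry(entry).build())
            .build()
        return Futures.immediateFuture(tile)
    }

    // Tiles can reference bitmaps and vector assets; this one has none.
    override fun onResourcesRequest(
        requestParams: RequestBuilders.ResourcesRequest
    ): ListenableFuture<ResourceBuilders.Resources> = Futures.immediateFuture(
        ResourceBuilders.Resources.Builder().setVersion("1").build()
    )
}

The service then gets declared in the app manifest with a tile-provider intent filter so the watch can discover it and offer it in the carousel.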
Double tapping a button on a Wear watch will now take you to the previous app.
Photograph: Google Another big reason Google lags behind the smartwatch competition is its lackluster health and fitness portfolio. In recent years, Apple has added electrocardiogram (ECG) and blood oxygen saturation (SpO2) measurements into its watches, and rumors suggest blood glucose monitoring will be the next new health feature in the upcoming Series 7 smartwatch.
But Google now owns Fitbit, and it's putting the wearable company's prowess in this area to use. Your next Google-powered smartwatch will come with many of the same features found on Fitbit devices right now, such as health tracking and workout progress, as well as those on-wrist celebrations that provide extra motivation. Fitbit will also produce premium smartwatches running Wear in the future.
However, features like ECG and SpO2 aren't baked into Wear natively. “Any of those more specialized functionalities like the ECG will be up to the manufacturer," Kilburn says. "We've enabled them to bring that kind of innovation to the marketplace, so it would be up to the specific device launch.” Both Samsung and Fitbit offer SpO2 and ECG tracking on existing watches and trackers , so it's likely (though not confirmed) that those functions will still be present when their respective Wear watches debut later.
Kilburn says Google has also worked with Fitbit (and Samsung) to rebuild the underlying health and fitness framework in Wear to make activity tracking more accurate, and to make it easier for developers to gather and use the tracking data. "In the past, they'd have to go all over the operating system to collect the different pieces of data, but we're bringing them all into one framework they'll be able to use.” Since Fitbit's app will also land on Wear, future Wear smartwatch owners can choose whether to use Fitbit's app or Google Fit to track fitness data. Kilburn couldn't comment on future plans, but says anyone who chooses a Fitbit Wear smartwatch will “continue to have a great experience” with Fitbit.
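Google didn't ship developer documentation for the rebuilt framework alongside the announcement, but the Health Services client that later appeared in androidx gives a flavor of the single-pipeline idea: one client and one callback per metric, instead of data collected "all over the operating system." A rough Kotlin sketch follows, with the caveat that these signatures reflect my reading of the released library and may not match it exactly.

import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.MeasureCallback
import androidx.health.services.client.data.Availability
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.DeltaDataType

// Heart rate arrives through the same pipeline as any other metric: register
// one callback for a data type and the platform streams samples to it.
fun watchHeartRate(context: Context) {
    val measureClient = HealthServices.getClient(context).measureClient

    val callback = object : MeasureCallback {
        override fun onAvailabilityChanged(
            dataType: DeltaDataType<*, *>,
            availability: Availability
        ) {
            // Sensor contact lost, permission revoked, and so on.
        }

        override fun onDataReceived(data: DataPointContainer) {
            val samples = data.getData(DataType.HEART_RATE_BPM)
            samples.lastOrNull()?.let { println("Heart rate: ${it.value} bpm") }
        }
    }

    measureClient.registerMeasureCallback(DataType.HEART_RATE_BPM, callback)
}

The same client also exposes exercise and passive-monitoring entry points, so workout metrics and all-day data arrive through that one framework as well.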
As for future updates to Wear, Kilburn didn't say whether Google will follow a yearly release cycle, the way it does with Android and the way Apple debuts a new watchOS version every year. Instead, expect a more frequent cadence of updates.
One caveat: Google closed its Fitbit acquisition in January following approval from the European Union's antitrust commission—with conditions that Google cannot use the health data of Fitbit users for advertising and must keep Fitbit and Google data separate—but that doesn't mean the US Department of Justice has signed off on the deal.
A Google spokesperson says it complied with the department's review and the "agreed upon waiting period expired without their objection,” but the DOJ's review is still ongoing, and it has enforcement tools it can utilize if it finds the acquisition harms competition.
Nevertheless, even though it's been seven years since Google launched its smartwatch platform, the company doesn't have much to show for it. As of the fourth quarter of 2020, Wear OS accounts for a measly 2.7 percent of the market, according to analysis firm Counterpoint Research.
Apple saw a 19 percent growth in global smartwatch sales in the same period and now commands 40 percent of the market share. Samsung jumped to 10 percent market share, and Fitbit was stagnant at 7 percent. Google now owns that 7 percent, but it still needs Samsung to grow the Wear platform.
“Apple obviously dominates, but Samsung is the clear number two player,” says Jeff Fieldhack, research director at Counterpoint Research. “They have brand recognition. They sell by far the most connected devices, which is kind of the trend also—cellular connectivity. By having a modem in it, you can have a standalone device and don't need your companion smartphone." Fieldhack thinks there's a good chance that bringing Samsung, Fitbit, and even fashion brands such as Fossil under the Wear family will spur greater competition and renew developer interest in the platform. “Like smartphones and tablets, as you get higher volumes, costs will go down and you'll get developers behind it more, so the bigger numbers will definitely help Wear OS.” With Samsung and Fitbit set to debut new smartwatches, and rumors of Google making its own Pixel smartwatch , Wear may finally be able to carve out a space in the market as a worthwhile competitor.
“We really believe the smartwatch is a key step in the next evolution in mobile computing," Kilburn says.
" |
117 | 2,023 | "13 Best Car Phone Mounts, Chargers, and Accessories (2023): Wireless Chargers, MagSafe Holders, and Dashcams | WIRED" | "https://www.wired.com/gallery/best-car-phone-mounts-chargers-and-accessories" | "Simon Hill Gear The Best Car Dashcams, Phone Mounts, and Chargers If you buy something using links in our stories, we may earn a commission. This helps support our journalism.
Learn more.
Please also consider subscribing to WIRED Getting ready for a drive? Whether you use your phone for navigation, music, or podcasts—or are just bringing it along for the ride—the right accessories can make it the perfect passenger. A good car mount will keep it within easy reach and in view, so you don't need to dangerously fumble for your handset and take your eyes off the road. You'll also want to keep your device charged. Add a dashcam to document your trip. We have tested a range of mounts, chargers, dashcams, and other accessories that might be useful for your daily commute.
Looking for more? Drivers should also consider putting together a Car Emergency Kit and checking out our Best Travel Mugs guide to round out the driving experience.
Updated August 2023: We added dashcams, mounts, chargers, and more from NextBase, iOttie, Belkin, Joyroom, Anker, Peak Design, and Monoprice, among others, and updated prices.
Photograph: Nikolai Grigorev/Getty Images First, Stay Safe What to Consider With Car Mounts and Accessories Before we get started, there are a couple of things you need to think about.
Mount or dashcam placement : Wherever you place your phone mount or dashcam, it’s vital to ensure it does not obstruct your view of the road. Many mounts and dashcams allow for dash or windshield placement, but you should check your local laws. (It's illegal to attach mounts to the windshield in many US states.) Dashcams work well behind the rearview mirror if permitted.
Cable placement : Think about where cables will run, and use cables just long enough to prevent tangles and excess. (Read our Best USB-C Cables guide for some recommendations.) Consider how to keep the end of the cable handy. (The best mounts have cable management for this purpose.) If you are using a dashcam, they usually come with a small tool you can use to push the cable into the seams of your car’s interior panels to tuck it away. That can work for charging cables too.
Keep your eyes on the road : Whether setting up navigation, picking a playlist, or doing anything that requires your attention, do it before you start driving. Once you’re on the road, use voice commands or have a passenger deal with any issues, and keep your focus on the road. Distracted driving leads to thousands of deaths every year.
Photograph: Nextbase Best Dashcam NextBase 622GW Dash Cam A good dashcam provides an irrefutable record of any unexpected event that might occur when you’re driving. Video evidence can be helpful in an accident, and dashcams may even reduce your insurance premiums. After testing several dashcams, the NextBase 622GW stands out as the best, with crystal clear video, a parking mode that activates the camera if your car is bumped when parked, and a companion app that makes it easy to review video on your phone. Unfortunately, it is also one of the most expensive options. It costs even more if you add the rear camera, but I recommend it if you’re worried about accidents, as rear-end crashes are the most common.
The adhesive works well but is tough to remove if you ever get rid of this dashcam. The cam slots into a magnetic mount, so it’s easy to clip in and out. A fitting tool pushes cables into the seams of your car’s interior panels. The video goes up to 4K at 30 frames per second, but I found 1440p HD at 60 fps got the best results (1080p at 120 fps is also an option). The footage is clear enough to read license plates, even in low light or bad weather, though I couldn't always see details at night, particularly when it was wet. Still, the night vision and image stabilization elevate this above other dashcams I tested. It has built-in GPS tracking with what3words support. The optional SOS function alerts emergency services if you crash but requires a subscription. I had no trouble connecting my iPhone and Pixel via Wi-Fi and using the NextBase app to review videos, though user reviews suggest some folks ran into issues here. It also boasts Alexa support for voice commands.
$435 at Amazon (With Rear Camera) $400 at Best Buy Photograph: iOttie A Dash Mount iOttie Easy One Touch 5 What I like best about this phone mount is that you can use it one-handed. Adjust the bottom feet, and when you place your phone against the trigger button, the arms automatically close around it. To remove it, simply press the release bars. The telescopic arm allows you to tweak the placement, and the ball joint makes it easy to set an ideal angle. This thoughtful design carries over to your charging cable as well—there's a magnetic tab you can attach to the end of your charging cord so it sticks to the back of the mount (so you don't have to fish around for it).
In my testing, the base with the locking suction cup was very secure, even on bumpy terrain. The downside? Removing the adhesive pad from my dashboard was tricky.
$25 at Amazon $25 at Target $25 at Best Buy Photograph: iOttie A Wireless Charging Mount iOttie Wireless Car Charger This is the mount in my car now, and it maintains everything that’s good about iOttie’s previous mount but adds wireless charging support. You can get it with the suction cup for the dashboard or opt for a CD slot or air vent mount. It closes automatically around your phone, has adjustable feet, a rotating ball joint to angle your phone, and a quick-release bar that pokes out on both sides. The Qi wireless charging can deliver 10 watts to an Android phone or 7.5 watts to an iPhone, and your phone automatically charges when you place it in the mount and start the car. You'll want to make sure your smartphone supports wireless charging in the first place.
All you'll need to do is plug the supplied cable into your car’s power socket, and the other end goes into a MicroUSB port on the bottom of the mount. The car socket end handily includes a second USB-A port you can use to charge another device.
$50 at Amazon $50 at Target Photograph: Joyroom Runner-Up Wireless Charging Mount Joyroom Auto-Match Wireless Car Charger Mount While I prefer the look and feel of the iOttie wireless charging mount above, this Joyroom mount has a couple of noteworthy features. Not only does it automatically clamp your phone when you slot it in, but it also adjusts the coil position for best alignment (you must adjust the feet manually with the iOttie). It also has a soft blue light on the bottom edge that's handy when it's dark. Charging goes up to 15 watts, but only with certain phones. It charges most Android phones at 10 watts and iPhones at 7.5 watts. It's affordable and comes with a USB-C to USB-A cable, but you'll need a charger if you don't have a USB-A port in your car. You must start the car for the mount to work, but a capacitor ensures you can release your phone even after switching the engine off.
$36 at Amazon Photograph: Belkin A Minimalist Mount Belkin Car Vent Mount If you recoil at the thought of a chunky cradle, you may prefer this sleek solution from Belkin. It’s a svelte, classy-looking silver and black vent mount that grips your phone surprisingly securely. You can rotate it to switch between portrait and landscape, and there’s a handy rubber clip on the back to hold your charging cable in place. It doesn’t work so well with larger phones, but smaller is better because this has no feet to support the bottom of the phone.
★ For larger phones : The Kenu Airframe Pro ($30) has a similar design but can accommodate larger phones and thick cases. It has a ball-and-socket joint that lets you rotate the device 360 degrees and slightly angle your phone for a better view.
$25 at Amazon Photograph: Belkin Best MagSafe Vent Mount Belkin BoostCharge Pro Car Charger With a compact design and support for 15-watt wireless charging, Belkin's BoostCharge Pro is our favorite MagSafe vent mount. The prongs cling to your vent securely, and a powerful array of magnets ensure MagSafe-enabled iPhones don't budge an inch, even on bumpy roads. (It works with the iPhone 12, iPhone 13, and iPhone 14 range.) Your mileage may vary with non-MagSafe iPhone cases. There's also a ball joint, so you can slightly angle your phone for a better view. It's a shame the USB-C cable is permanently attached, as it's long. There is a plug-in charger for folks without USB-C ports in their car, but I recommend snagging a separate dual or triple charger like the ones below to gain extra ports.
★ Another alternative: WIRED reviews editor Julian Chokkattu really likes the Peak Design Car Vent Mount ($100).
It stays super secure on the vent—there's no wobbling—and his phone clings firmly to the magnetic charging pad. It works well with iPhones with MagSafe support, but you can also pair it with a Pixel or Samsung phone if you use Peak Design's Everyday Case.
$100 at Amazon $100 at Apple Photograph: iOttie A MagSafe Dash Mount iOttie Velox Pro Magnetic Wireless Cooling Charger If you prefer a mount on your dash or windshield, this classy MagSafe mount from iOttie is a smart pick for folks with an iPhone 12 or later. It attaches to a dashboard pad or windshield with a suction cup that proved secure in my testing. The telescopic arm combines with a ball joint to give you a wide range of movement to find the ideal position. Sadly, it maxes out at 7.5 watts for charging, but I like that the USB-C charging cable is removable, so you can detach and stow it when your iPhone is topped up. There is also a built-in fan to help keep the temperature down when the sun is out.
$75 at Amazon Photograph: Scosche A Magnetic Mount Scosche MagicMount Pro Charge5 If you are keen to get a magnetic mount but don't have a MagSafe iPhone, try this system from Scosche. It comes with a metal plate you can stick to the back of any phone or slip inside your case, allowing it to magnetically stick to the mount. (It does also work with MagSafe iPhones.) However, the magnets are not especially strong, so if you have a thick case or a large phone, do not pick this mount.
The dash mount itself sticks securely and is adjustable. The charger that goes into your car socket has a spare USB-C port, which is handy, and there are two stick-on cable management clips in the box. I'm just not a huge fan of the permanently attached charging cable, which is proprietary rather than USB-C.
$37 at Amazon $60 at Scosche Photograph: Scosche A Fast Charger Scosche PowerVolt PD30 Fast Mini There are two things that elevate this above your average car socket charger. First, it has a clever small fabric tab that makes it easy to pull out and allows it to sit flush in the socket. Second, it doesn’t only support the Power Delivery (PD) standard, but it also supports Programmable Power Supply (PPS), which means it can charge all the latest phones from Samsung or Apple at top speed. The USB-C port can deliver up to 30 watts, so you can even charge a MacBook Air. If you need a cable, check our Best USB-C Cables guide for ideas.
$18 at Amazon Photograph: Otterbox A Dual Charger OtterBox Dual Port Car Charger Picking the right car charger obviously depends on what you need to charge, but if you have a couple of recent phones, you cannot go wrong with this one. You get two USB-C ports, one rated at 20 watts and the other at 30 watts. Both support Power Delivery and the 30-watt port also supports PPS, so you can fast-charge most phones or tablets. OtterBox offers a few different dual-port car chargers, including one with a 12-W USB-A and an 18-W USB-C ($30).
They come in black or white, with a gold highlight, and each has a textured end that’s easy to grip.
$26 at Amazon $35 at Otterbox Photograph: Anker A Triple Charger Anker 535 Car Charger If you want to charge multiple devices from your car socket, this triple charger from Anker has you covered with two USB-C ports and one USB-A. With nothing else plugged in, the first USB-C can deliver up to 67 watts, enough to charge a laptop. If you want to use the ports together, you can draw 45 watts from the first USB-C (which also supports PPS and PD) and 9 watts apiece from the other two ports. You get a 3.2-foot USB-C to USB-C charging cable with it.
$40 at Amazon Photograph: Amazon A Portable Battery Noco Boost Plus GB40 Jump Starter It's always a smart idea to have a power bank in your car, and you can find a range of options in our Best Portable Chargers guide. But this one from Noco could be a roadside lifesaver because it can jump-start your car when the battery is dead. The Noco Boost Plus is a 1,000-amp, 12-volt battery pack with jump leads. It also has a USB-A port to charge your phone or other devices and a handy built-in 100-lumen LED flashlight. It's IP65-rated and good for temperatures from –4 degrees Fahrenheit up to 122 degrees. Sling it in your trunk as part of an emergency kit, but remember to charge it at least every six months.
$100 at Amazon $100 at Walmart Photograph: Monoprice A Charging Cable Monoprice USB-C to USB-C Select Series 3.1 Gen 2 If you want to top off your phone or another mobile device in the car, you need a cable, and this affordable option from Monoprice is great. It’s a short, thick, durable cable capable of 100-W charging and 10-Gbps data transfer. The shorter lengths (1.64 or 3.28 feet) are better for the car, and Monoprice offers a lifetime warranty. If you have an iPhone or USB-A port in your car, you’ll want to choose something else from our best USB cables guide.
$12 at Amazon $10 at Monoprice Photograph: iOttie Honorable Mentions Other Car Accessories We Have Tested iOttie Aivo View Dash Cam for $150 : With a sleek, compact design, the iOttie Aivo View looks the part and records video at up to 1,600p and 30 frames per second. There's a Bluetooth remote button to trigger recordings, and it supports Alexa for voice commands, but I found the app flaky and very slow to download videos.
Vantrue Element 1 Dash Cam for $150 : This dinky dashcam from Vantrue records crisp video at up to 1,440p and 30 frames per second with support for HDR. It also has a park mode and built-in Wi-Fi and GPS, but I could not get the app to connect, so I had to remove the microSD card to review the footage.
NextBase 222 Dash Cam for $90 : This basic dashcam works reasonably well and has the same design as NextBase’s more expensive models with a color screen on the back. But it can only record at 1080p and 30 frames per second, and I found it hard to read license plates at night. It does support parking mode, but there’s no GPS, so videos lack information on coordinates and speed.
NextBase 522GW Dash Cam for $300 : If your budget won’t stretch to the 622GW above, this is the next model down, and it boasts many of the same features, including parking mode, Alexa, and the optional emergency SOS subscription. Video tops out at 1,440p and 30 fps, there’s no what3words support, and nighttime performance is nowhere near as good, but this is probably your best option in this price bracket.
iOttie Velox MagSafe Wireless Charging Car Mount for $48 : Our previous pick for the best MagSafe vent mount isn't just classy but also rock solid. It’s similar to the Belkin listed above but maxes out at 7.5 watts. If you don’t mind the slower charging speed, you can save money by choosing this mount.
Joyroom MagSafe Vent Mount Charger for $33 : Here's another wireless charging MagSafe vent mount for iPhones. It holds MagSafe iPhones and cases securely, and emits a soft blue light to make it easy to find in the dark (it turns off when you mount your iPhone). It's a solid option if you want your iPhone in landscape orientation, but it's not suitable for heavier Max models. Joyroom claims it charges at 15 watts, but it only charged my iPhone 14 Pro at 7.5 watts.
Joyroom Magnetic Wireless Car Charger Mount for $30 : This is similar to the Joyroom mount listed above, but it lacks feet and only works with MagSafe iPhones. It works pretty well, and you can have your iPhone in portrait or landscape orientation. You get a USB-C to USB-A cable but no charger in the box.
Mophie Dual USB-C Car Charger for $35 : This is a solid dual USB-C port charger that only misses out on a spot above because it maxes out at 40 watts. It supports Power Delivery, has a durable aluminum finish, and there’s a handy grippy texture that makes it easy to remove.
" |
118 | 2,019 | "What Makes a Good Cooler (According to Physics)? | WIRED" | "https://www.wired.com/story/what-makes-good-cooler-from-physics-perspective" | "Meredith Fore Science What Makes a Good Cooler (According to Physics)? Our in-house Know-It-Alls answer questions about your interactions with technology.
Q: What is the best cooler, from a physics perspective? A: First, a reminder from high school physics: Heat, on an atomic level, is the motion of molecules. The quicker they move, the hotter the solid/liquid/gas is. In a hot gas, this means molecules whizzing around, bouncing off the walls. In a hot solid, molecules vibrate where they sit, passing their vibrations to any slower-vibrating neighbors through the springy molecular bonds holding the solid together.
Cold is the absence of heat, much like darkness is the absence of light. The goal of a cooler, then, is not so much to keep the "coldness" in, but to keep the heat out.
How well a cooler can do this will depend on three key factors: insulation, air, and ice.
So what's the best cooler? Think about it from a thermodynamics perspective. Generally, all commercial coolers use the same method of insulation: foam between the inner and outer walls. Foam is a good insulator for two reasons. First, it is filled with gas bubbles; gases conduct heat less effectively than either liquids or solids, and trapping the gas in small bubbles prevents the gas from effectively transferring heat via convection. Second, the polymer molecules that make up the walls of the bubbles are bonded fairly loosely; this limits the rate at which heat can be transferred from one molecule to another. (Molecules in high heat-conducting materials like metal contain free-flowing electrons, which transfer heat more readily than their less mobile counterparts.) But not all foams are created equal! You've got your closed-cell foam and your open-cell foam. Closed-cell foam is dense and rigid, and most of the gas bubbles do not touch each other. Open-cell foam is more flexible and lighter-weight, but because most of the gas bubbles are in contact with each other, it's easier for heat to travel through it and is therefore less insulating. The two types of foam also hold different types of gas: Open-cell bubbles are often filled with water vapor, while closed-cell foams are filled with a variety of other chemicals with better insulating properties, such as pentane.
(For decades, most insulating closed-cell foams were filled with CFCs; those were phased out after their impact on the ozone layer was discovered. Most current foam gases aren't quite as good for protecting cold meats, but they are significantly better for protecting the atmosphere!) Open-cell is the type of foam used in soft-sided lunch totes and the like, while hard-sided coolers generally contain closed-cell foam in their walls. Because of their difference in insulating power, you are nearly always better off with a hard-sided cooler than a soft-sided one, as long as portability is not a limiting factor.
Lastly, and most intuitively, the effectiveness of your cooler will depend on the thickness of the foam used to insulate it. Some cheaper coolers do not have insulation in their lids, or have thinner insulation at the bottom of the cooler; both of these factors will reduce your cooler's effectiveness. In addition, if the lid of your cooler features cup holders, consider putting a plug made of insulating material like styrofoam at the bottom of them. (Or if you don't use them, consider filling them up with spray foam or memory foam!) Compared with solids and liquids, air is a poor conductor of heat. But don't underestimate it! Every time the contents of your cooler come into contact with the warmer air outside, those faster-moving molecules get in there and poke your cold molecules in the ribs, making them move faster—and making your beer a little warmer. For this reason, it's best that your cooler isn't opened often. Consider having separate coolers for food and for drinks.
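To put rough numbers on why foam type and wall thickness matter so much, here's a minimal back-of-the-envelope sketch using Fourier's law of conduction (Q = kAΔT/d). The wall area, temperature difference, and conductivities below are illustrative assumptions, not measurements of any particular cooler:

```python
# Rough steady-state estimate of heat leaking through cooler walls,
# using Fourier's law of conduction: Q = k * A * dT / d.
# Every input below is an illustrative assumption, not a measured spec.

WALL_AREA_M2 = 1.2        # assumed total wall area of a mid-size cooler
DELTA_T_C = 30.0          # roughly 90 F outside vs. 32 F ice water inside
HEAT_OF_FUSION = 334_000  # joules needed to melt 1 kg of ice
SECONDS_PER_DAY = 86_400

# Approximate thermal conductivities, in W/(m*K)
materials = {
    "closed-cell foam": 0.025,
    "open-cell foam": 0.040,
    "uninsulated plastic": 0.20,
}

for name, k in materials.items():
    for thickness_cm in (2, 5):
        d = thickness_cm / 100                       # wall thickness in meters
        watts_in = k * WALL_AREA_M2 * DELTA_T_C / d  # heat flowing into the cooler
        ice_melted = watts_in * SECONDS_PER_DAY / HEAT_OF_FUSION
        print(f"{name}, {thickness_cm} cm walls: ~{watts_in:.0f} W leaking in, "
              f"~{ice_melted:.1f} kg of ice melted per day")
```

Even with these crude numbers, the pattern matches the advice above: doubling the wall thickness roughly halves the heat leak, and closed-cell foam leaks far less than bare plastic.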
In addition, a good cooler will be airtight. Check the seal around the lid of your cooler, and make sure it lets in as little air as possible. Some coolers, such as YETI and Orca, are molded out of one piece of plastic with no seams, in a process called roto-molding. It's incredibly effective at keeping air out, but it can also be incredibly expensive.
If it's possible to fill your cooler such that there are no air pockets, do so—preferably with ice. Because while a mixture of ice and melted water in your cooler will always be 32 degrees Fahrenheit, any air in your cooler will be warmer than that. Filling those air pockets with ice gets rid of that problem.
The generally accepted ratio of ice to contents is about 2:1. (The ideal ratio has so many factors, including the specific heat of your contents, that it’s not something worth calculating before each camping trip.) The melting rate of ice is directly related to its surface area: A block of ice melts much slower than an equivalent amount of ice in small cubes. Consider buying block ice, or freeze water in a plastic gallon jug to make your own "block.” (After this story was first published, physics teacher Fred Bucheit wrote in to remind us that melting the ice is actually desirable, since that's how the ice absorbs the heat that would otherwise warm your beer. If you want your contents at maximum chill, fast-melting cubes are your friend; if you have a long trip where your contents don't have to be freezing, block ice will last much longer.) As the ice melts, the water level in your cooler will increase. Don't drain it! It’s still insulating your cooler's contents. (Water may conduct heat better than air, but as long as the water is colder than the air, it's the better choice to keep your beer in.) Fast-moving molecules are the enemy of cold things, and keeping these molecules at bay is the valiant ambition of all coolers—the best ones take full advantage of the laws of physics to maximize their power.
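As a rough sanity check on the block-versus-cubes advice, you can compare exposed surface areas directly, since melt rate scales approximately with surface area. The block shape and cube size here are assumptions for illustration:

```python
# Compare the surface area of one gallon of ice as a single block
# versus the same gallon as small cubes. Sizes are assumptions.

GALLON_CM3 = 3785.0

# One roughly cubic block
block_side = GALLON_CM3 ** (1 / 3)     # about 15.6 cm per side
block_area = 6 * block_side ** 2

# The same volume as 3 cm ice cubes
cube_side = 3.0
n_cubes = GALLON_CM3 / cube_side ** 3  # about 140 cubes
cubes_area = n_cubes * 6 * cube_side ** 2

print(f"block: ~{block_area:.0f} cm^2 exposed")
print(f"cubes: ~{cubes_area:.0f} cm^2 exposed "
      f"({cubes_area / block_area:.1f}x more surface)")
```

The cubes expose about five times the surface area, which is why they chill your contents faster but disappear much sooner.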
Meredith Fore likes her hard cider ice-cold and writes for WIRED about physics as an AAAS Mass Media Fellow.
Updated 7-11-19, 9 pm EDT, to clarify how melting ice can actually help keep your drinks cold.
What can we tell you? No, really, what do you want one of our in-house experts to tell you? Post your question in the comments or email the Know-It-Alls.
" |
119 | 2,017 | "How Self-Driving Cars Will Solve the Ethical Trolley Problem | WIRED" | "https://www.wired.com/2017/03/make-us-safer-robocars-will-sometimes-kill" | "Matt Simon Transportation To Make Us All Safer, Robocars Will Sometimes Have to Kill Editor's note: This is the second entry in our new series Is That a Thing*, in which we explore tech's biggest myths, misconceptions, and—every so often—actual truths. Watch the first episode, about cellphones and cancer, here.
* Let’s say you’re driving down Main Street and your brakes give out. As the terror hits, a gaggle of children spills out into the road. Do you A) swerve into Keith’s Frozen Yogurt Emporium, killing yourself, covering your car in toppings, and sparing the kids or B) assume they’re the Children of the Corn and just power through, killing them and saving your own life? Any decent human would choose the former, of course, because even murderous kiddie farmers have rights.
But would a self-driving car make the right choice? Maybe yes. But even if it does, by programming a machine to save children, you're also programming it to kill the driver. This is known as the trolley problem (it's older than self-driving cars, you see), and it illustrates a strange truth: Not only will robocars fail to completely eliminate traffic deaths, but on very, very rare occasions, they'll be choosing who to sacrifice—all to make the roads of tomorrow a far safer place.
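To see why the programming choice is so uncomfortable, here's a toy sketch of the dilemma framed as harm minimization. The actions, casualty counts, and equal weighting of lives are all hypothetical, for illustration only; no automaker has published logic like this:

```python
# Toy framing of the trolley problem as a cost-minimization choice.
# All actions, outcomes, and weights here are hypothetical.

actions = {
    "swerve into the storefront": {"occupants_killed": 1, "pedestrians_killed": 0},
    "continue straight":          {"occupants_killed": 0, "pedestrians_killed": 5},
}

def expected_harm(outcome):
    # Weighting every life equally; whether a car may ever favor its own
    # occupants is exactly the unresolved policy question.
    return outcome["occupants_killed"] + outcome["pedestrians_killed"]

choice = min(actions, key=lambda a: expected_harm(actions[a]))
print(choice)  # -> "swerve into the storefront": the car sacrifices its owner
```

The discomfort is right there in the output: the harm-minimizing rule is the one that kills the person who bought the car.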
Cut your pearl-clutching: Self-driving cars will save countless lives. Humanity needs them, badly—more than 30,000 people die every year in road accidents in the United States alone. Worldwide, it's more than a million. Because, it turns out, humans are terrible drivers. Machines, by contrast, are consistent, calculating, and incapable of getting drunk, angry, or distracted.
But autonomy can't save everyone—the technology will never be perfect—and society must understand that very well before the technology arrives. Society also needs to understand that robocars are for the greater good. "Convincing the public must begin with understanding what the public is worried about and what the psychological mechanisms involved are," says Iyad Rahwan of the MIT Media Lab, who's studying just that.
In our little thought experiment with the frozen yogurt, most people would choose to sacrifice their own life for the good of the crowd. But Rahwan has found most people wouldn’t buy a self-driving car that could make the decision to kill them as the passenger. That’s silly and irrational, sure—this would be an exceedingly rare situation and overall you are far safer in the hands of a machine than driving yourself—but this finding poses a serious problem: Robocars may soon be ready to hit the road, but humans aren’t ready to accept the ethical challenges that come along with them.
But, in fairness, these are early days in the self-driving revolution. Researchers need to gather more data about public perception, and automakers in turn need to be open with their customers. “I think everybody is learning,” says Rahwan. “The public is learning, the regulators are learning, and the carmakers are learning as well.” Meaning for the time being, Keith’s Frozen Yogurt Emporium is safe from the merciless robocars.
For the time being.
" |
120 | 2,023 | "The Battle for the Soul of Buy Nothing | WIRED" | "https://www.wired.com/story/the-battle-for-buy-nothing" | "Vauhini Vara Backchannel The Battle for the Soul of Buy Nothing Photograph: Holly Andres When my son was little, my mom started collecting his outgrown clothes to give to strangers on the internet. She would meet these people through Buy Nothing, a project that had been created by two women from Bainbridge Island, Washington, not far from her home in Seattle.
The mission of Buy Nothing, which had a local cult following, was to revive old-fashioned sharing among neighbors. People were organized by town or neighborhood into Facebook groups, where they could post what they needed, or no longer needed, and their neighbors would respond accordingly.
What made this different from Goodwill, Craigslist , or other freebie groups was that the people in your group always lived close by, and—because Buy Nothing was hosted on Facebook—everyone’s names and photos were visible, and messaging other members was as easy as texting. Pickups tended to happen at the front door, prompting face-to-face conversation. After a while, strangers became friendly acquaintances, their stoops integrated into your mental map of your town. Through my mom, random people came to own the forgotten detritus of my motherhood: unused diapers, a nursing cover (“that you threw in bathroom trash,” my mom accused in an email). My mom had been living frugally and sustainably long before it was fashionable—diluting her dish soap, cutting her sponges into quarters—and on Buy Nothing, she’d found her people.
When my son was 6, my mom retired. She packed her life into used cardboard boxes procured on Buy Nothing and moved down the street from me in Fort Collins, Colorado, where she joined a new Buy Nothing group. With her freed-up time, she acquired empty kombucha bottles on Buy Nothing, filled them with home-brewed kombucha, then regifted those. I used the group by proxy—once, to get rid of a box of half-full toiletries, another time to find a clip-on leopard tail for my son’s summer theater production—and eventually joined it myself.
Our group, one of several in Fort Collins, included more than 1,000 members. Buy Nothing had grown a lot in the years since my mom had been an early adopter, especially during the worst of the pandemic, when people were avoiding stores. By summer 2022, there were thousands of groups in more than 60 countries, with about 6 million members. The founders, Liesl Clark and Rebecca Rockefeller, had published a book about buying less in which they described a grand vision of strengthening individuals, communities, and the environment. People told apocryphal stories about diehards who never bought anything, like, ever.
Facebook was a big part of what made Buy Nothing so effective. But it was also the reason I was far less active there than my mom. Like a lot of people I knew, I’d fallen off using Facebook much. Given Buy Nothing’s mission of commerce-free community building, there seemed something dissonant to me about its existence on a platform that mined people’s personal information and stoked invidious “engagement” for ad dollars.
It turned out that Clark and Rockefeller, the Buy Nothing founders, also considered Facebook an uncomfortable fit. When I talked to them both on a Zoom call last summer, Rockefeller, 53, was on her parents’ porch in glasses, a delicate blouse, and a shaggy silverish bob, while Clark, 56, sat at her dining table wearing a ponytail and a fuzzy cardigan. “We used Facebook because it was a free tool, and it had a lot of reach. There were a lot of reasons that we picked it,” Rockefeller explained. “But we realized very early on that it also came with some things that conflicted with our mission.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Rebecca Rockefeller and Liesl Clark founded Buy Nothing in 2013.
Photograph: Holly Andres She and Clark had a wearied, beleaguered air. A year earlier they had decided to move Buy Nothing away from Facebook, turning their attention to launching a stand-alone Buy Nothing app. This kind of undertaking was, of course, one of those many things in life that do not come free. They registered a business, The Buy Nothing Project Inc., and pitched venture capitalists on investing in them. Clark had taken to punctuating her tweets with hashtags such as #futureofwork and #MakerEconomy.
So far, though, Buy Nothing Inc. was a flop. Even more upsetting, Clark and Rockefeller were getting blasted from within their own community. Some Buy Nothing members accused them, in blistering Facebook comments, of selling out. This reaction might have been expected, in retrospect, from a commerce-free collective, but the intensity of it shook Rockefeller and Clark. They had built a thriving and generous community on the most corporate of internet platforms. But now that they were trying to become independent—a move that they saw as committing further to their principles—they were met with furious disbelief that the founders of a movement premised on strings-free gifting now appeared to be trying to make a buck. “You have to fund it. There’s no shame in that,” Clark said. “But we are shamed nonstop for having named it the Buy Nothing Project.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Buy Nothing’s much-repeated origin tale starts with Clark, a documentarian from Bainbridge Island, spending time in a remote mountain community in Nepal with her husband, the elite mountaineer Pete Athans. There she noticed that people reused their belongings and shared, rather than bought, what they needed. Back home, Clark and Rockefeller, a friend, would often take walks with their children along the water and inventory the trash that had washed ashore. They wondered whether they could reduce waste by bringing the sort of gifting Clark had seen in Nepal into their own town, and Buy Nothing was born.
None of this is exactly inaccurate. Clark is a filmmaker; she did observe gift economies in Nepal; she and Rockefeller did audit the Bainbridge shoreline. But the full story of Buy Nothing starts when they met, in 2009, through an online gifting forum called Freecycle.
Photograph: Andria Lo Earlier that year, Rockefeller had gotten divorced and ended up as a single mother. While married she had been working-class, but suddenly she was poor, living on food stamps and Medicaid. She joined Freecycle expecting to take things she needed while simultaneously giving back. She kept getting in trouble with the group’s local moderator, though, for offerings that he deemed unacceptable. “I had these twigs that I’d pruned,” she told me. “The guy was like, ‘Your old shrubbery is not a gift.’” He was wrong. The twigs did attract interest—from Clark, it turned out. When she came by to pick them up, the women commiserated over Freecycle’s strict rules and found they had a lot in common.
Both women had unconventional lives. Clark’s academic parents raised their kids partly in Nigeria and Chile and spent their spare time on DIY projects. At one point they bought land in New Hampshire, and the whole family built a house on it by hand. Later, her work as a documentarian took her all over the world, with her children often tagging along. When Rockefeller was 3 years old, meanwhile, her mother joined a cult and left the family. Rockefeller’s father remarried, and he and Rockefeller’s adoptive mother, both government workers, instilled the family with a strong ethic of public service. As she grew up, an iconoclastic streak kept Rockefeller from settling into one particular career; she worked as a kayak guide and a craftsperson, among other gigs.
Both women homeschooled their children—for Clark, to accommodate work and volunteerism, and for Rockefeller, to provide a more personalized education for her daughter, who is on the autism spectrum—and they started getting together for school projects. They found they shared a mutual devotion to environmentalism and frugal living. Whenever they saw each other, they'd come up with ideas for idealistic ventures: a local bartering club, a lending library for household tools. None ever took off.
In July 2013, Rockefeller posted on Facebook, "If I started a local free/trade/borrow listserve, like Freecycle but with a different attitude re: moderation of posts, would you join?" There was a chorus of positive responses—yes!, yup!, prolly.
Clark jumped in: “But how can each member post? Do you submit to a moderator who then posts your item for you? Do you have to have a photo?” Rockefeller replied, and in the thread—then later, in person—the women hashed out the details.
The initial premise was to make people feel good about whatever they had to offer. “Literally, we want people to come in and offer their onion skins and their chunks of concrete,” Rockefeller told me. And unlike Freecycle, which focuses on giving and dissuades requests, they would encourage people to ask for anything. But maybe more consequential than any of those differences in sensibility was that Rockefeller and Clark decided to host Buy Nothing on Facebook, with its built-in social tools.
On July 6, Rockefeller created a Facebook group called Buy Nothing Bainbridge and added Clark as a co-administrator. By the end of the day it had more than 100 members. Within weeks the group had added hundreds more members, and strangers in nearby towns were asking how they could start their own. Rockefeller and Clark helped them, and by the end of December they had created 78 Buy Nothing groups, with more than 12,000 members in all.
The day before New Year’s Eve, Clark, Rockefeller, and a group of friends and Buy Nothing members got together to plan for the future. They had tea and muffins, then did an exercise. On multicolored index cards, they each wrote down their wildest dreams for Buy Nothing. One woman hoped that it would become a nonprofit and publish a magazine; another imagined it would spawn a virtual currency.
The group made a list of Buy Nothing's positives (dedicated admins, free, connects virtual world to the real world) and negatives (24/7 time commitment, funding, problems managing Facebook). They wrote down the opportunities ahead, and also the risks. In the latter column they listed the challenge of replicating their original vision across dozens of groups, the limitations of the Facebook platform, the chance of egos getting in the way of the group's principles, and the possibility of being "unable to fund core expenses." Years later, the list would turn out to be prescient. But at that time, almost a decade ago, all the excitement made Rockefeller and Clark feel like anything was possible.
Photograph: Holly Andres Test the limits of what can be gotten or discarded on Buy Nothing, and you will be confounded. You can proffer a medium-size rock, and someone will want it for their garden. You can post dryer lint, and a neighbor will convert it into hamster bedding. In their book, Rockefeller and Clark write about a childless couple who, after multiple miscarriages, finally gave away their unused baby items. The recipient, collecting this on behalf of a pregnant friend, mentioned that the friend was thinking about putting her child up for adoption. One thing led to another, and soon the couple became the infant's adoptive parents.
This was a particularly unusual case, but over the months I spent talking to Buy Nothing members, it wasn’t even the wildest anecdote I heard. In my group in Fort Collins, recent offers have included a used stick of upmarket deodorant, a half-eaten artichoke pizza, and the fluff from inside a couch. All found new life. The couch fluff, actually, went to at least three people—one of whom, a friend of mine, was sewing tiny stuffed gnomes as Christmas presents.
A woman in Seattle named Katylin (she doesn't use a last name) told me that Buy Nothing has allowed her to live well in one of the most expensive cities in the world. Katylin describes herself as blue-collar; she's had various jobs including practicing cosmetology and working at a grocery store. Seattle has gotten wealthier and more economically stratified over the years, but on Buy Nothing, she told me, relations feel equalized.
Katylin has given away chicken droppings (for fertilizer), stale aquarium water (a nutrient-rich plant food), and crushed egg shells (a natural calcium source). She has received a stove, a dishwasher, toys for her children, concert tickets, and a wooden boat, which she rows out onto the lake at night to stargaze.
For two years during the pandemic, Katylin told me, she bought almost nothing except food. “I feel great after a day of Buy Nothing,” she said. “You don’t go to a Walmart, come home, and feel happy about your purchases.” Rockefeller and Clark decided early on that they didn’t want to codify Buy Nothing’s principles into a business or a nonprofit, with all the unwieldy administration that would entail. They did, however, want to supervise how the Buy Nothing groups functioned, so they built a makeshift management structure using the tools already embedded in Facebook. On Facebook, groups have to be operated by one or more administrators, so Rockefeller and Clark decided to have local volunteers run each group. They disseminated information to these people through another Facebook group called the Admin Hub. They appointed regional admins to oversee the local ones, and finally a small circle of about 20 global admins to handle project-wide tasks and weigh in on big decisions. Rockefeller and Clark had the final word.
Almost all of the admins were women, and their labor was entirely volunteer. As Rockefeller and Clark sank their lives into Buy Nothing, sometimes at the expense of their families and careers, so too did thousands of others. Local administrators said they spent seven or eight hours a week, and in some cases as many as 40, reviewing requests to join their groups, making sure their communities felt welcoming, and keeping the giving spirit active by, for example, posting messages of gratitude.
Another part of an admin's job was to enforce the 10 rules of Buy Nothing. One core rule concerned each group's borders, which were limited to small geographical zones. The idea was that this would foster a more intimate community and reduce a group's carbon footprint. A member could belong to only the group where they lived, and once a group reached 1,000 people, it was supposed to split into smaller communities, a process called "sprouting." Rockefeller and Clark imagined Buy Nothing sprouting into groups covering ever-smaller geographies until, eventually, so many people were on Buy Nothing that it would be rendered obsolete. "We know our immediate neighbors so well that we can just walk over there and say, 'Hey,'" Clark said.
It was a romantic vision for what the internet could facilitate. But as Buy Nothing expanded, people started to chafe against this stricture and others. While Rockefeller and Clark regularly received notes of gratitude, they also got messages of irritation, and even hate mail, that blamed them for mishaps and infighting in the local groups or accused them of heavy-handedness with all the rules.
In 2018, some of these localized complaints started to bubble up to the movement’s surface. When a Buy Nothing group in Boston’s Jamaica Plain neighborhood was approaching 5,000 people and still hadn’t subdivided, regional admins started pushing for a sprout, a local admin at the time told me. (Regional admins couldn’t be reached for comment.) She said that when the sprout was announced to the group, members were furious: They protested that they didn’t want to split up, and they worried a sprout might fall along racial and socioeconomic lines and reinforce the legacy of segregation and redlining.
According to the admin and other members I spoke to, the regional admins doubled down, as did members, and the language got heated. “Our community gets really fired up on the internet,” the admin said. “It was rocky.” Then Clark got involved, writing in a regional group for admins that she was “saddened” by the Jamaica Plain community’s uncivil behavior. At this, the local admins quit in protest, and the remaining members revolted completely.
Members of the group discovered a YouTube video Clark had filmed during a Himalayan expedition co-led by Athans, her husband, with support from the Nepalese government. The video shows Athans in climbing gear, handling an ancient human skull while suspended in front of a cave. In voiceover, Clark explains reverently, “We’ve uncovered a people who persevered, their story of good health recorded in their bones.” She describes present-day villagers who, when Clark and her family brought gifts of clothing, insisted that the items be divided equally among the households, “so each family would have equal social capital to share.” She goes on: “We wondered, could we start an egalitarian gift economy in our own town?” The video cuts to Bainbridge Island.
Former members told me that the video was roasted for having colonialist undertones. One member, Kai Haskins, wrote a Medium post about the conflict titled, “That ‘Hyper-Local’ Buy Nothing Group You Love Is Controlled by a Wealthy White Woman in Washington State and Is Reinforcing Systemic Racism and Segregation.” Clark took issue with Haskins’s account; for one thing, she said, she’s not wealthy. Still, she eventually apologized in a post to the Jamaica Plain group. “I agree that it is important for all of us, and white people in particular, to talk about racism without becoming defensive. I clearly have been, and I’m learning from my own fragility,” she wrote. By that time, though, everyone was fed up. The Jamaica Plain group fell apart, with thousands of members defecting and starting a separate group.
One way to approach the episode might have been to see it as an inevitable, if uncomfortable, outgrowth of a movement that encouraged people to feel communal ownership of their local gift economies. If it ended with members in Jamaica Plain starting a rival gifting group, so what? That was not, however, how Rockefeller and Clark responded. They worried that the upset in Jamaica Plain, and other episodes like it, represented a bigger problem, and in late 2019 they formed an “equity team” to figure out how to create an “actively anti-racist and anti-oppression culture” within Buy Nothing.
Katherine Valenzuela Parsons, a member of the equity team, told me that the team discovered people in other groups had also experienced a racialized dimension to sprouting. And Buy Nothing’s problems went further still. Some local admins were letting people offer Confederate flags. In several instances, when people of color complained about this and other racist or offensive posts, they’d been accused of incivility and thrown out of their groups. In other cases, members attacked admins of color for raising these issues.
Rockefeller and Clark had known about some of this, but the scope startled them. On the one hand, the Jamaica Plain experience had made them feel that high-level admins, themselves included, might have overstepped. On the other hand, they didn’t want the Buy Nothing experience to be so unsupervised that toxicity and racism would go unchecked and local admins would abuse their power.
They also felt that Facebook incentivized provocative, even hostile, communication. “Even if your motivations are purely lovely and welcoming and inclusive, you’re basically putting yourself in the meat grinder of social media, and you will be eaten up,” Rockefeller said. The equity team hadn’t highlighted Facebook itself as a problem, but Rockefeller and Clark started to wonder whether it all couldn’t be solved by going off the platform entirely.
The two of them had harbored vague desires since the beginning of Buy Nothing to divest themselves of Facebook, but they had never figured out how to do it. One option was to turn Buy Nothing into an independent nonprofit. But Rockefeller, who has spent much of her adult life volunteering and working in nonprofits, dreaded the cycle of fundraising and subsequent obligation to meet funders’ demands. It also seemed weird to start a business based on giving stuff away for free. Now, they came up with a plan. They’d collect donations from Buy Nothing members to build a platform independent of Big Tech. On Black Friday of 2019—celebrated in their community as Buy Nothing Day—Rockefeller and Clark posted an announcement on Buy Nothing’s main Facebook page: They were building an app called SOOP, for Share On Our Platform. “Because we want to answer only to the public good and not to platform owners who will profit from the use of personal data,” they wrote, “we are raising the funds to do this on our own.”

The response was mixed at best. Some community members found it wildly hypocritical that the founders were asking for money. It was a fair point: Rockefeller and Clark’s own rules for local groups banned “requests or offers for monetary assistance, including requests for loans, cash, or donations.” Optics-wise, it didn’t help that Rockefeller and Clark had started plugging their forthcoming book, The Buy Nothing, Get Everything Plan, on Buy Nothing’s Facebook page. A few members did donate, but the total—just $20,000—wasn’t enough for even the most basic proof of concept. Humbled, Rockefeller and Clark returned the money and tabled the idea.
Their book came out a few months later. The tone was part Marie Kondo, part manifesto. “Money isn’t all that wonderful,” Clark and Rockefeller wrote, adding, “The market economy begets isolation, and money disconnects us from one another.” Those who worried that the book would make the authors rich needn’t have wasted their energy—it was published just as the pandemic arrived, and barely sold.
The pandemic propelled Buy Nothing into mainstream popularity. With people hunkering down in their neighborhoods, membership started growing faster than ever, to about 1.5 million users in July 2020; over the following year, the project would add nearly 3 million more. People shared groceries, homemade masks, over-the-counter medication. It was exhilarating but also, for Rockefeller and Clark, exhausting; suddenly they were working nine-hour days on top of everything else.
Meanwhile, they’d been changing Buy Nothing’s operations, partly in light of the equity team’s findings. They started getting rid of regional and global admins, a move meant to return control to local groups and streamline communication. They published self-serve materials on their website so that people could launch new groups on their own. They also loosened Buy Nothing’s rules to let groups determine their own geographical boundaries, decide when to sprout, and allow members to belong to more than one group.
Not everyone appreciated the changes. Haskins, one of Buy Nothing’s more vocal critics in Jamaica Plain, said they came across as “performative bullshit.” Parsons, the equity team member, told me that while she came around to them, they went much further than anything she and the equity team had suggested.
Other admins felt the founders had broken Buy Nothing’s intimate feel and community-led support systems. And they objected to the top-down direction of these changes. One of them, Andrea Schwalb, took to the Admin Hub to denounce the project’s new direction, and said she was kicked out. She started a separate Facebook group, called Gifting With Integrity—OG Buy Nothing Support Group, for Buy Nothing admins who preferred the old organizational structure and rules. Schwalb and others were already prickly about how Rockefeller and Clark publicized their book; all the changes, she said, made matters worse. “We were big mad.”

Photograph: Holly Andres

Clark and Rockefeller saw their modifications as necessary, if controversial, improvements. They were making the organization less bureaucratic and more equitable; those who disagreed were resisting change. And it was hard for them to feel generous toward their most strident critics.
By this point, Clark had stopped making documentaries and was working on Buy Nothing full-time. Rockefeller had, in Buy Nothing’s early years, taken a job at an organization that assists people with disabilities and eventually became its executive director. As Buy Nothing took up more of her time, however, she stepped into a part-time position as an administrative assistant that paid little more than minimum wage. “I’m basically living on the edge of poverty so that I can serve this thing that I helped create,” she told me. She acknowledged she’d done this by choice. Still, she added, “Sometimes it feels like, ‘Oh, this is absolute insanity, it makes no sense.’” She and Clark started dreaming of paying themselves and others for their Buy Nothing labor; it seemed only right. Their crowdfunding efforts had backfired. Now they wondered whether it wasn’t such a bad idea to turn Buy Nothing more straightforwardly into a business.
In January 2021, Clark received a LinkedIn message from Tunji Williams, a former attorney turned entrepreneur who had previously built a small startup. “I just learned about your amazing movement,” he wrote, and offered to collaborate with them. They invited him to meet over Zoom, where Williams explained that the birth of his first child had inspired an idea for an app to share secondhand baby paraphernalia and other items. Friends told him about Buy Nothing, and he thought he’d approach them about launching a startup together.
Clark and Rockefeller accepted. Going into business with someone who happened to email at the right moment may not have been the savviest decision, but the way they saw it, their cards were finally lining up. Williams came across as genuine and experienced, and, if they were being honest, they needed help. On January 13, they registered The Buy Nothing Project Inc. as a benefit corporation—a for-profit business obligated to prioritize society, workers, the community, and the environment—in Delaware. This time they took a more conventional approach to fundraising, collecting $100,000 from family and friends. The company had four cofounders: Clark, Rockefeller, Williams, and a software developer named Lucas Rix who, as it happened, had also sent a blind email to Clark and Rockefeller. Clark would be the CEO, Williams the COO, Rockefeller the head of community, and Rix the head of product. For the first time in months, Rockefeller and Clark felt energized. “It was a huge relief,” Rockefeller told me.
Three weeks after registering The Buy Nothing Project Inc., Clark announced in the Admin Hub that they were building an app “to host the Buy Nothing movement as it continues to grow.” The founders would now dedicate their time to this new endeavor. As a gesture of gratitude, she added, they would give a stake in the platform to admins who joined the waitlist for the app. “Your enthusiastic participation will help us reach critical mass more quickly,” she wrote.
The reaction was not particularly enthusiastic. Some people did cheer the founders on and sign up for the waitlist—but others were upset. The app had no admin roles at all. Several admins told me that although they didn’t begrudge Rockefeller and Clark their entrepreneurial turn, they couldn’t help but view the app as competition with the existing communities that they’d painstakingly built over years. “There was a time when I was spending 30 hours a week doing things for Buy Nothing,” Kristi Fisher, an admin in California, told me. “There was this feeling of, like, nobody asked us or took our thoughts and feelings into consideration.” Others turned their ire directly on the founders, harshly criticizing them for capitalizing on the work of thousands of volunteers and then shilling their product in that very same space. Rockefeller and Clark felt personally attacked. As they pushed on with what they saw as an attempt to give the Buy Nothing community a healthier existence online, it seemed possible that in the process they might lose the community entirely.
In November 2021, the Buy Nothing app launched. It was immediately clear how different it was from the Facebook groups. You didn’t have to be approved for admission, for one. You could set any address as your home base and search for items within a larger radius: maybe one mile away, maybe 20.
But some core features of the Buy Nothing culture had been lost. You could no longer click on a person and see where they worked or whether you had friends in common. On Facebook, Buy Nothing posts had appeared in your feed spontaneously, encouraging off-the-cuff interactions, but using the app required remembering to open it in the first place. All this added up to making the posts feel less intimate and more transactional. Some people told me that, on the app, Buy Nothing resembled the depersonalized services against which it had originally defined itself.
The launch of the app intensified the feud between the Buy Nothing founders and their internal critics. Rockefeller and Clark almost fully reoriented the Buy Nothing website around the app; at one point, information about the Facebook groups was tucked under a snarky message: “Want Facebook to profit from your Buy Nothing experience? We’ve got you covered!” Schwalb, meanwhile, developed her OG group into a sort of alternate universe in which nothing about Buy Nothing had changed. She shared Buy Nothing documents that the founders considered obsolete, coached admins on how to operate under the old rules, and, through friends who still belonged to the Admin Hub, generally kept tabs on what Buy Nothing was up to.
In the weeks after launch, thousands of people tried out the app. By the end of the year, 174,000 people worldwide had downloaded it; of those, about 97,000 were using it once a month or more. As time passed, though, the numbers stalled. In the App Store, one-star ratings dominated. By April 2022, monthly users had fallen to 75,000.
The discontent among Buy Nothing Facebook admins explained some of this; they were hardly going to evangelize for an app they resented. But the far more significant problem was that the app just wasn’t very good. It was so basic and bug-ridden that, at first, people couldn’t even figure out how to register. To limit expenses, Clark and Rockefeller had contracted a web-development shop in Poland to make a simple version. They eventually raised another $400,000, but that was still short of what they needed.
The truth was that turning Buy Nothing into a business had come with far more expenses than revenues. If Facebook profited from Buy Nothing members’ activities, it also covered many of their costs. With the launch of the app, the resources that came for free with Facebook—software development, computing power, visibility—were suddenly Clark and Rockefeller’s responsibility.
It was logical that offsetting those costs, and eventually turning a profit, required bringing in revenue, but whenever I asked Clark and Rockefeller about this, they sounded genuinely perplexed. They had vowed not to sell their members’ personal data or run targeted advertisements, thus ruling out some of the most obvious business models. And their ideas for moneymaking enterprises that wouldn’t sacrifice their ideals struck me as convoluted: They considered collecting generalized information about what items people were sharing, then selling that to local municipalities tracking waste; they thought of pushing public-service announcements about reuse that users would pay to turn off. Their most straightforward idea was to incorporate a Taskrabbit-like function, allowing users to charge one another for add-on services such as delivering gifts or repairing broken items, with Buy Nothing taking a cut. But then that, of course, would involve buying something.
They were at an impasse, and funding was running out. So, in May of last year, Clark did what any self-respecting entrepreneur in her position would do: She started writing to venture capitalists and angel investors. In the months that followed, she sent messages to 163 investors. She got 17 meetings—and no funding.
Clark blamed the difficult fundraising environment at the time. Rockefeller agreed, though she couldn’t help but suspect something else: “We’re two middle-aged women trying to raise money, and we have been a women-led movement from the beginning. They look at us, and they’re like, ‘Well, you haven’t run a multimillion-dollar company, so why should I give you any money?’” She bristled at that perception: “We took nothing, and we turned it into a movement that now literally millions of people participate in every day. Come on. That didn’t happen by mistake.”

Still, no funding materialized. Nor, as time went on, did the users. I spoke to dozens of Buy Nothing members while reporting on this article, and the vast majority had either barely heard of the app or had tried it once or twice before abandoning it. By June of last year, Rockefeller and Clark quietly stopped developing the app. By winter, they were scraping the bottom of the Buy Nothing bank account.
Clark planned to cover the company’s costs, around $5,000 a month, as long as she needed to. But she and Rockefeller both sounded more disheartened than ever. Once, as we began a Zoom call, I could hear an incessant pinging in the background. Clark explained that she had set up notifications for support requests through the app. It turned out she and Rockefeller were mostly responding to the requests themselves.
Photograph: Holly Andres

At the one-year anniversary of its launch, the Buy Nothing app had been downloaded 600,000 times, but only 91,000 people were regularly using it, not many more than at the beginning. Meanwhile, the Facebook groups from which the founders had disengaged were thriving without them. Global membership had surpassed 7 million. When I asked what Rockefeller and Clark thought would happen to Buy Nothing Inc. if they couldn’t come up with additional funding, they said they weren’t interested in thinking in such fatalistic terms. But when I posed the same question to Williams, the COO, he said he’d considered it. “We’re adults,” he said. “We’ve got to shut it down.”

Rockefeller and Clark hadn’t given up, though. They decided to switch tactics yet again. Over Thanksgiving weekend, they changed the Buy Nothing website so that when someone showed up looking for information about starting a Facebook group, they were directed to fill out a form that would automatically be sent to Rockefeller and Clark. The form asked people whether they had tried the app, offering a download link. If, after trying it, they still wanted to start a Facebook group, Rockefeller or Clark would build the group for them.
Rockefeller and Clark may have realized that if they couldn’t compete with Facebook, they would do better to take control of what they’d started. A couple of days after Christmas, Schwalb opened up Facebook to find that her OG group had vanished. Months earlier, Buy Nothing Inc. had secured trademarks on the phrases “Buy Nothing” and “Buy Nothing Project” and reported the OG group to Facebook for trademark infringement.
Clark and Rockefeller told me that while they wanted to give local admins flexibility in running their groups, Gifting With Integrity had crossed a line. The group was aggressively promoting an approach that the founders had discarded; it had combined the Buy Nothing brand with the Gifting With Integrity name; it was disseminating old documents without what the founders considered proper attribution. “I don’t get to say ‘I’m making shoes, and they’re called Nike, and they have the swoosh on them, and you should buy my Nikes,’” Rockefeller told me. To Schwalb and her co-admins, this was a stretch. For one thing, Gifting With Integrity wasn’t asking people to buy anything.
In January, Rockefeller and Clark posted a message to the Admin Hub, elaborating on their stance. They were just trying to protect their trademark, they said. To that end, they were asking that all Facebook groups link to a Buy Nothing web page describing the project. Rockefeller and Clark told me that they required this so that admins wouldn’t have to make manual updates whenever details changed. But Schwalb noticed that the web page, conveniently, promoted the Buy Nothing app.
To get back on Facebook without reprisal, the OG group changed its name to, simply, Gifting With Integrity—OG Admin Support Group, removing the part about Buy Nothing. They encouraged local gifting groups to change their names as well. Their website reads, “We are not affiliated with, nor do we support in any fashion, the Buy Nothing Project.” On Facebook, the Gifting With Integrity group has 1,500 members, all overseeing local groups.
My own Buy Nothing group, in Fort Collins, was one of those that followed Gifting With Integrity’s lead. It’s now called the Northeast Fort Collins Gifting Community. A friend shared with me a message sent to the group by an admin announcing the change: “We truly believe in building our little hyperlocal community and plan to continue to operate by the original principles that make this group great. We don’t want that to disappear into the machinery of the new monetized system.” When I asked Schwalb how many local groups had discarded the Buy Nothing name and adopted Gifting With Integrity’s approach, she replied, “We’re not keeping numbers, and we most definitely don’t intend to, because I don’t want to turn into the Buy Nothing conglomerate.”

In some ways, Rockefeller and Clark’s loss of control made me think of women inventors who hadn’t gotten credit for their products: Rosalind Franklin, the scientist who helped discover the double helix; Lizzie Magie, the gamemaker who invented Monopoly. But then, Rockefeller and Clark had started Buy Nothing as a counteragent to the capitalist ethic that concentrates wealth and power in the hands of the few while ruining lives, communities, and the environment. The project had been a success, owing to their efforts, certainly, and also to those of the thousands of volunteers who made Buy Nothing their own. If the movement ended up splintering into an unaccountable mess of local variations—and Rockefeller and Clark didn’t make a cent in the process—maybe that was the most fitting ending possible.
Photograph: Holly Andres

I had all but written off their chances of survival when, in late January, I heard from Rockefeller and Clark again. Recently, with things getting desperate, Clark had looked back through her email to see whether there were any connections she’d missed. Scrolling, she hit upon a year-old email from a former Intuit executive named Hugh Molotsi. Molotsi had launched his own startup, Ujama, that helped parents coordinate childcare with one another via an app, but it didn’t have many users. Molotsi had written to see whether Rockefeller and Clark wanted to use his technology, but since they were building their own app at the time, they’d said no.
Now Clark did some research and realized Molotsi’s app was much better than anything they’d built. She’d also learned, from her conversion to entrepreneurship, how important it was to network. She got in touch with Molotsi and, after a couple of calls, made a proposition to merge the companies under Buy Nothing’s name. Molotsi would join the company as chief technology officer and rework the Buy Nothing app. “He needs community, we need tech,” Clark explained.
Molotsi agreed; the deal is pending. As part of the transition, Williams stepped down as COO, though he remains on the Buy Nothing board. Molotsi also introduced Buy Nothing’s founders to their first funder in a long time: an angel investor named Paul English, known for cofounding the travel website Kayak. English put in $100,000 and introduced Clark and Rockefeller to a number of VCs and angel investors. So far, Clark told me, the response to their pitches has been much warmer than before, though no one has committed to investing. Visits to the app are up, too: Monthly users recently surpassed 100,000.
When I spoke to Molotsi over Zoom, he said he feels the company needs to do a better job explaining to investors how it can make money: “The Buy Nothing name—that’s a challenge, because it’s like, OK, nothing is being bought, how are you going to monetize the platform?” I asked how that question might be answered. “There are lots of things happening around gift-giving that I believe are monetizable,” he said. “So, for example, if you have a couch you’re trying to get rid of, and I want your couch, but you don’t have a truck, and I don’t have a truck, that presents a problem: How are we going to make this happen?” He was talking, I realized, about the delivery service Rockefeller and Clark had floated months earlier.
One of the last times I spoke to the founders, I remarked that these recent developments looked good for them. Clark responded that she still feels like they’re at a low point. Her schedule had become punishing: She’d been waking up between 4 and 5 am to work on Buy Nothing, and not stopping until she went to bed. It struck me as a big departure from the all-volunteer camaraderie of Buy Nothing’s early years. But Clark is as certain as ever that she and Rockefeller are on the right path in their decade-long quest to get people to buy less. “Rebecca and I are just two creatives. This was just never where we thought we would head,” she said. “But now it makes sense, because we want to build a bigger, better world.”
" |
121 | 2,022 | "How to Sell Your Old Smartwatch or Fitness Tracker (2023) | WIRED" | "https://www.wired.com/story/how-to-sell-smartwatch-fitness-tracker" | "Simon Hill Gear How to Sell Your Old Smartwatch or Fitness Tracker Photograph: Neil Godwin/T3 Magazine/Getty Images Whether you bought a new fitness tracker to get in shape or snagged a smartwatch to have notifications on your wrist, there’s a good chance your old one has been consigned to a drawer or closet. It’s not doing anyone any good languishing there, and the longer you leave it, the lower its value drops. Before it slips from memory entirely, why not spruce up your old smartwatch and sell or gift it to someone? Here, we’ll run through how to prep your old fitness tracker and sell it for as much money as possible, gift it, donate it, or recycle it. If you don’t have a replacement yet, you can check out our guides to the Best Smartwatches or Best Fitness Trackers for ideas.
If you buy something using links in our stories, we may earn a commission. This helps support our journalism.
Learn more.
Before you wipe your wearable, make sure you've backed up the latest data so you don't lose anything. Most smartwatches and fitness trackers sync data automatically with a companion app on your phone. If you’re upgrading to a different model from the same manufacturer and plan to continue using the same app, do a final sync.
If you plan to change to a new brand of watch or tracker, you should export your data. The process for this depends on the device manufacturer. Here are some links to guides on how to export data for some of the biggest brands:

Apple: Open the Health app, tap your profile at the top right, then Export All Health Data.

Fitbit: Sign in to Fitbit.com and go to Settings, Data Export.

Garmin: Visit Garmin Connect and go to Activities, All Activities, Export CSV.

Google: Go to Google Takeout, deselect all, and then Fit.

Samsung: Open the Samsung Health app and go to More options, Settings, Download personal data.

Withings: Sign in to your Withings account to download a CSV file with all data.
If you have decided which smartwatch or tracker you’re switching to, you can always search the app stores for a third-party app designed to transfer data between those services (there are several available). Once you have exported your data, consider deleting it from the old service if you no longer intend to use it.
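If you want to sanity-check an export before deleting anything from the old service, a few lines of Python will do it. This is a minimal sketch: the filename is a placeholder and the columns differ by vendor, so treat listing the fields as the point of the exercise rather than assuming any particular schema.

```python
import pandas as pd

# "activities_export.csv" is a placeholder -- point this at whatever file
# your vendor's export tool produced. Column names vary by brand, so
# inspect them instead of assuming a particular layout.
df = pd.read_csv("activities_export.csv")

print(f"{len(df)} records exported")
print(df.columns.tolist())  # confirm the fields you care about came through
print(df.head())            # spot-check the first few rows
```

If the row count or date range looks shorter than your actual history, rerun the export before you wipe the device or close the account.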
The correct procedure to unpair and factory reset your smartwatch or fitness tracker depends on the manufacturer and model. Unpairing will often automatically trigger a factory reset. We recommend fully charging your device before you wipe it. Once you have wiped your smartwatch or fitness tracker, turn it off. Here are some handy links again: Apple Watch: Unpair and erase your Apple Watch and remember to turn off Activation Lock.
Fitbit: Here's how to erase a Fitbit device and remove it from your account.
Garmin: How to delete all information from your device and remove it from Garmin Connect.
Google Wear OS: Reset to factory settings and check it’s not listed in your devices.
Samsung: How to unpair and reset a Samsung smartwatch.
Withings: How to unpair and delete a ScanWatch (search support for other models).
Since it has likely been on your wrist through rain, shine, and many sweaty workouts, you should clean your device thoroughly. Use a microfiber cleaning cloth and some elbow grease to start. If that doesn’t do the trick, apply some warm water to the cloth to remove stubborn marks and follow up with a dry cloth. We have other applicable tips in our guide on how to clean your smartphone.
If you plan on selling your old smartwatch or fitness tracker, or even if you’re going to gift or donate it, then you should round up the charger, cable, and any other accessories that came with it. See if you can dig up the original box, too. Not only does it look more attractive to a buyer when it’s boxed up the way it was when you bought it, but the original box is also usually designed to keep the device safe for shipping.
You are finally ready to sell your smartwatch or fitness tracker. But where should you sell it? Selling directly is likely to net you the largest payout, but there is more hassle and risk involved.
Craigslist , Facebook Marketplace , and Nextdoor are all good for face-to-face sales. The beauty of these options is that they don’t charge you any fees and they can help you find a local buyer, but it’s up to you to negotiate a price and handle the exchange. It's a good idea to meet in public and bring a friend with you. Never give the buyer personal information, and be aware that some people will try to haggle when you meet, even if you have already agreed on a price.
You can find a larger market on eBay , where there is a brisk trade in old smartwatches and fitness trackers. There is a little uncertainty with the auction process, but looking at sale prices for similar devices will give you a good idea of what you are likely to get. Just remember that you must package and ship your device after it sells. Be honest, particularly if there are signs of wear on your device, or you will likely end up with a return or dispute. Our list of eBay tips may prove useful, though it's focused on buying on the service.
Swappa is a good alternative if you don’t like eBay.
Places like GadgetPickup , Trademore , and DeCluttr will offer you cash for your old smartwatch or fitness tracker. You get an offer based on the details you enter into a website, and the company provides free shipping or even prepaid shipping materials. The trouble with these companies is they frequently reduce the offer after they receive and inspect your device. It can also take a while to receive your funds. There’s no denying the convenience of selling to a company like this, but make sure you shop around and weigh the offer against customer reviews. The SellCell website is a handy aggregator that shows you offers from some of these services.
Best Buy , Amazon , Verizon , Samsung , Walmart , and many other companies allow you to trade in smartwatches (they don’t usually accept fitness trackers) for credit. In our experience, these offers tend to be low, but if you’re planning to buy from one of these companies, this is an easy way to get some money off. Trade-ins offer the same advantages in terms of a fixed offer and free shipping, but you may find they reduce the offer after inspecting your device. Some of the big retailers let you drop devices off in-store.
Consider gifting your old smartwatches or fitness trackers to family members and friends. You might also look at donating them to charity.
Recycle Health is a nonprofit that collects and refurbishes fitness trackers and provides them to underserved populations to encourage fitness. You can also donate old smartwatches or fitness trackers to Goodwill or find a local charity that accepts them.
If your old smartwatch or fitness tracker is broken beyond repair, then it’s time to recycle. Whatever you do, don’t throw that device in the trash. Most manufacturers have a recycling program, and some big retailers have recycling drop-off points for old electronics, including smartwatches and fitness trackers, but do a little research first.
E-waste is a growing problem, and some supposedly recycled products end up in hellish e-waste graveyards.
To find a responsible recycler near you, search the e-Stewards website.
📩 The latest on tech, science, and more: Get our newsletters ! The quest to trap CO 2 in stone—and beat climate change The trouble with Encanto ? It twerks too hard Here's how Apple's iCloud Private Relay works This app gives you a tasty way to fight food waste Simulation tech can help predict the biggest threats 👁️ Explore AI like never before with our new database ✨ Optimize your home life with our Gear team’s best picks, from robot vacuums to affordable mattresses to smart speakers Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Gear You’ll Be Able Buy Cars on Amazon Next Year Boone Ashworth Gear The Best USB Hubs and Docks for Connecting All Your Gadgets Eric Ravenscraft Gear The Best Black Friday Deals on Electric Bikes and Accessories Adrienne So Contributor X Topics how-to Fitness Trackers recycling money Shopping smartwatches Scott Gilbertson Scott Gilbertson Reece Rogers Boone Ashworth Carlton Reid Virginia Heffernan Boone Ashworth Boone Ashworth WIRED COUPONS Dyson promo code Extra 20% off sitewide - Dyson promo code GoPro Promo Code GoPro Promo Code: save 15% on your next order Samsung Promo Code +30% Off with this Samsung promo code Dell Coupon Code American Express Dell Coupon Code: Score 10% off select purchases Best Buy Coupon Best Buy coupon: Score $300 off select laptops VistaPrint promo code 15% off VistaPrint promo code when you sign up for emails Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
122 | 2,023 | "X’s Sneaky New Ads Might Be Illegal | WIRED" | "https://www.wired.com/story/xs-sneaky-new-ads-might-be-illegal" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Vittoria Elliott Business X’s Sneaky New Ads Might Be Illegal Photograph: JOSH EDELSON/Getty Images Save this story Save Save this story Save Last week, Mashable reported that on X (formerly Twitter), users were noticing a new type of advertisement : Minus a regular handle or username, the ad’s headline looks like a normal tweet, with the avatar a miniature of whatever featured image appears in the body of the post. There is no notification in the upper right-hand corner saying “Ad,” and users can’t click on the ad to see more about who paid for it.
“Dude what the fuck is this I can’t click on it there’s no account name there’s no username I’m screaming what the hell it’s not even an ad,” one user tweeted.
But Twitter’s new ad interface may be more than just annoying—it may be illegal.
Under Section 5(a) of the US Federal Trade Commission Act , companies are banned from using deceptive ad practices, meaning consumers must know that ads are, well, ads. For social platforms, this means that any native advertising, or advertising designed to look like content on the platform, needs to be clearly labeled.
“There’s really no doubt to us that X’s lack of disclosure here misleads consumers,” says Sarah Kay Wiley, policy and partnerships director at Check My Ads, an ad industry watchdog group. “Consumers are simply not able to differentiate what is content and what is not paid content. Even I’ve been duped, and I work in this space.” X did not immediately respond to a request for comment.
X has two feeds, a Following feed that is meant to show users content from accounts they follow and a For You feed that includes algorithmically recommended content from across the platform. Wiley says she has seen examples of this unlabeled ad content in both feeds. What’s more confusing is the fact that some other content is still labeled as ads. “It’s really egregious because some ads are still marked as ads,” says Wiley. “It really provides opportunities for fraudulent marketers to reach consumers.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg An FTC staff attorney with the agency’s ad practices division, who spoke to WIRED on condition of anonymity, says that the agency encourages platforms to use a consistent format for advertising disclosures in order to avoid confusing customers.
And Wiley says that if advertisers think X is doing the work of labeling their content when it’s not, they could also face compliance issues for not properly disclosing that their posts are ads. “The advertisers themselves are also victims,” she says.
It’s no secret that X has been scrambling to bring in ad revenue. After Elon Musk took ownership of the company, he publicly declared that it would roll back content moderation efforts and fired much of the staff responsible for this work. Brands, worried that their ads would appear next to disinformation or hateful content, began abandoning the platform. Musk has tried to turn the ship around, bringing in CEO Linda Yaccarino , an experienced advertising executive (who Musk has repeatedly undermined , acting in ways that go against her promises to make the platform safe for advertisers). But recent data shows that the platform has seen a 42 percent drop in ad revenue since Musk’s takeover. X has also begun selling ads via Google Ad Manager and InMobi, a marked shift from its historical practice of dealing with advertisers directly.
And it gets even more complicated—and thorny—for X. In 2011, then-Twitter was issued a consent decree , which would allow the government to take legal action against the company for not safeguarding user data, thereby making it vulnerable to hackers. As part of the settlement with the FTC, “Twitter will be barred for 20 years from misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information,” the FTC stated at the time.
Showing users ads that they don’t know are ads would likely put the company in violation of this agreement, says Christopher Terry, associate professor of media law at the University of Minnesota.
“The whole point of putting native advertising is to slap a cookie on your computer that then makes you subject to all kinds of horrific other advertising,” says Terry. If X collects data from clicks on content that users don’t know are ads, that’s likely a violation of the company’s agreement to protect user data—and the consent decree, he says.
While it’s likely that the FTC could have grounds to come after X, Terry says he’s not sure the agency will prioritize the platform, noting the agency’s current focus on antitrust actions against Google and Amazon. In May, however, the agency ordered several social media companies, including X, to disclose how they were keeping fraudulent ads off their platforms. And Terry says that if the FTC decides to pursue X, Musk and his company could be in real trouble.
“You can screw with all these people. You can put up all this white supremacist content you want, but you really don’t want to mess with the FTC,” says Terry. “Because if they come knocking, you’re going to be really sorry you bought this company.” You Might Also Like … 📨 Make the most of chatbots with our AI Unlocked newsletter Taylor Swift, Star Wars, Stranger Things , and Deadpool have one man in common Generative AI is playing a surprising role in Israel-Hamas disinformation The new era of social media looks as bad for privacy as the last one Johnny Cash’s Taylor Swift cover predicts the boring future of AI music Your internet browser does not belong to you 🔌 Charge right into summer with the best travel adapters , power banks , and USB hubs Platforms and power reporter Topics twitter Social Media Advertising David Gilbert Amit Katwala Kari McMahon Will Knight Khari Johnson Joel Khalili Joel Khalili Andy Greenberg Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
123 | 2,006 | "Privacy Debacle Hall of Fame | WIRED" | "https://www.wired.com/2006/08/privacy-debacle-hall-of-fame" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons WIRED Staff Security Privacy Debacle Hall of Fame Save this story Save Save this story Save Earlier this month AOL publicly released a data trove: 500,000 search queries culled from three months of user traffic on its search engine.
The company claimed it was trying to help researchers by providing "anonymized" search information, but experts and the public were shocked at how easy it was to figure out who had been searching on what. Apparently, AOL's anonymizing process didn't include removing names, addresses and Social Security numbers. Although the company has since apologized and taken the data down, there are at least half-a-dozen mirrors still out there for all to browse.
This may have been one of the dumbest privacy debacles of all time, but it certainly wasn't the first. Here are ten other privacy snafus that made the world an unsafer place. Despite the obvious flaws of rankings, we have attempted one as follows, in descending order: 10. ChoicePoint data spill: ChoicePoint, one of the largest data brokers in the world, in early 2005 admitted that it had released sensitive data on roughly 163,000 people to fraudsters who signed up as ChoicePoint customers starting in 2001. At least 800 cases of identity theft resulted. Sued by the FTC, the company paid $15 million in a settlement earlier this year -- at least $5 million of which goes to the consumers whose lives they ruined.
9. VA laptop theft: In May, two teenagers stole a laptop from the Department of Veterans Affairs that contained financial information on more than 25 million veterans, as well as people on active duty. Electronic Frontier Foundation staff attorney Kurt Opsahl said this is one of the worst data breaches in recent memory because of its sheer scale: "The database contained the names, Social Security numbers and dates of birth of as many as 26.5 million veterans and their families, though allegedly recovered without evidence of the thieves obtaining access." The case also raised awareness about how many unprotected, private databases are floating around on easily-stolen, mobile devices. When the laptop was recovered, it appeared that none of the data had been disturbed -- but only time will tell.
8. CardSystems hacked: In 2005 MasterCard revealed that one of its third-party processing partners, CardSystems, had lost data on over 40 million customers to online data thieves. Many of those customers were MasterCard holders. Worst of all, according to MasterCard reps, the data was stolen "by running a script." In other words, CardSystems had incredibly poor digital security and 40 million credit-card holders paid for it.
7. Discovery of data on used hard drives for sale: In 2003, security geek and MIT grad student Simson Garfinkel bought a batch of 20 used hard drives to test out some forensic data recovery techniques. He was dismayed to learn that many of these drives had not had their memories properly wiped: One still contained data from its days in an ATM machine, and two were packed with credit card numbers. He bought several dozen more used hard drives, and found that overall only about 10 percent had had their memories adequately wiped. In retrospect, Garfinkel is still shocked at what he found. "Most, if not all, of these cases would have been avoided if the laptops had been configured with cryptographic file systems," he said, adding that "any halfway-decent IT department" should be able to do that.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg 6. Philip Agee's revenge: The Judith Miller case may be fresh in our minds, but Miller's revelations about Valerie Plame pale in comparison with those of former CIA operative Philip Agee. After turning his back on a government agency he considered evil and corrupt, Agee fled to England and in 1975 published a book called Inside the Company.
It revealed the identities of nearly 250 CIA agents, and the U.S. government claimed it led to the executions of two who had been working undercover in Eastern Europe. In 1978 and 1979, Agee published two volumes called Dirty Work , which contained details on over 2000 CIA agents. Today, Agee lives in Havana, and runs a website that helps U.S. citizens travel to Cuba.
5. Amy Boyer's murder: In 1999, a stalker named Liam Youens paid New Hampshire-based internet investigation firm Docusearch roughly $150 to get the Social Security number and workplace address of Amy Boyer. He'd been obsessed with Boyer since high school, and had created a website that detailed his plans to destroy her. With the data provided by Docusearch, Youens was able to hide outside Boyer's office and shoot her to death before killing himself. His terrible crime wound up creating a good law : In 2003, the New Hampshire Supreme Court held that investigation firms can be held liable for harms they cause by divulging personal information.
4. Testing CAPPS II: In late 2003, JetBlue and Northwest Airlines confessed that for the past two years they had been giving personal data from millions of airline passengers to NASA and the TSA. The two agencies were data mining the information as part of their research on a new passenger threat-assessment program called Computer Assisted Passenger Prescreening System, or CAPPS II. The data included addresses, phone numbers and credit card numbers. After public outcry over the TSA's use of private passenger data to "test" the beta version of CAPPS II, the program was terminated in 2004. It has been replaced by a similar program called Secure Flight.
3. COINTELPRO: From 1956 to 1971, the FBI's secret counterintelligence program COINTELPRO worked to undermine what the agency deemed "politically radical" groups, usually by infiltrating those groups and gathering sensitive information about their members. Among COINTELPRO's targets was Martin Luther King, who was placed under illegal surveillance and harassed. COINTELPRO was unmasked in 1971, when a group of leftists called The Citizens' Commission to Investigate the FBI broke into a field office and stole some documents detailing COINTELPRO's activities. Subsequent Congressional investigations into COINTELPRO's antics led to widespread condemnation of the program. Sen. Frank Church, who headed up the investigation, concluded: "The Bureau conducted a sophisticated vigilante operation aimed squarely at preventing the exercise of First Amendment rights of speech and association, on the theory that preventing the growth of dangerous groups and the propagation of dangerous ideas would protect the national security and deter violence." Many COINTELPRO documents remain classified to this day.
2. AT&T lets the NSA listen to all phone calls: Earlier this year, a whistle-blower at AT&T revealed that the telco giant had been routing all U.S. phone calls and internet traffic to the NSA as an antiterrorism measure. The agency had gotten similar data from other major telcos in the country -- only Qwest had refused. Investigations, mostly conducted by journalists, revealed that every single phone call made in the U.S. over the five years of the NSA domestic spying program had essentially been tapped. Internet traffic suffered the same fate. AT&T is currently being sued in numerous class action suits on behalf of its customers for illegally handing over private data to the government. The cases were recently consolidated in San Francisco federal court. (Disclosure: Wired News has intervened in one of these cases and is seeking to make public evidence filed under seal.)
1. The creation of the Social Security Number: Although security blogger Adam Shostack is known for his expertise on information-age data leaks, he considers the creation of the Social Security Number in 1936 to be the "largest privacy disaster in the history of the U.S." Referencing controversy over the card's creation at the time, he said, "Ironically, privacy advocates warned that the number would become a de facto national ID, and their concerns were belittled, then proven right, setting a pattern that still goes on today."
" |
124 | 2,019 | "The Bonkers Tech That Detects Lightning 6,000 Miles Away | WIRED" | "https://www.wired.com/story/lightning-tech" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science The Bonkers Tech That Detects Lightning 6,000 Miles Away Drew Angerer/Getty Images Save this story Save Save this story Save If lightning strikes a hundred miles from the North Pole, and no one is around to hear it, does it make a sound ? Yes, because there’s a global array of sensors that’s always listening, pinpointing lightning strikes in time and space from as far away as 6,000 miles.
In June and this past weekend, the North Pole played host to rare thunderstorms, an event that may become less rare as climate change ramps up. And it would have gone entirely unnoticed by faraway humans if it weren't for the assistance of a company called Vaisala, which operates the sensor network and uses it to triangulate a lightning strike, feeding the data to outfits like the National Weather Service. "This is a relatively new system, and so our ability to detect lightning that far north has drastically improved over the last 5 to 10 years," says Alex Young, a meteorologist with the National Weather Service in Fairbanks, Alaska. "As opposed to: who knows if an event like this happened 30 years ago?" First, we need to talk about how lightning forms.
When the Sun heats the Earth's surface, air and moisture rise and create water droplets. With enough solar energy, the warm, wet air keeps rising and rising, while at the same time, cold air in the system is sinking—leading to a swirling mass called a deep convective cloud, which builds electrical charges that escalate into lightning. Usually Arctic air doesn't hold enough heat to get all that convection. But in these times of climate change, nothing is normal anymore.
Luckily for Vaisala, lightning betrays itself in a number of ways. We humans know it by the flash of light and the deafening sound, but what our bodies don't notice is that the massive electrical current of a lightning strike generates radio bursts. For a fleeting moment, a lightning bolt works like a giant, rambunctious radio tower. "If you have a lightning discharge that hits the ground, you might have a channel of charge that's a few miles long," says Ryan Said, a research scientist at Vaisala. "And that essentially acts as a temporary antenna in the sky." Visualization: Vaisala. Still, if it weren't for a quirk in our atmosphere, this signal would be difficult to detect. But the ionosphere—an ionized layer in Earth's upper atmosphere—reflects a significant amount of the radio signal back to the ground for Vaisala's devices to detect. Think of these like bigger, more sensitive versions of a loop antenna for receiving AM broadcasts. "If we have a sensitive enough receiver, we can detect these radio emissions at global distances," says Said. "That's how, with dozens of receivers around the world, we can monitor lightning anywhere, including up into the Arctic." (See above for a visualization of strikes around the world.) The trick lies in essentially triangulating the signal. "We measure the time at which these radio bursts reach the sensors and the direction," notes Said. If a lightning bolt's radio burst hits at least three sensors in Vaisala's synchronized global network, the system can pinpoint when and where the signal originated. Vaisala can even translate the radio signal into sound for our human ears, which you can hear here. (Each pop is a single lightning strike.)
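Vaisala doesn't publish its solver, but the time-of-arrival idea Said describes can be sketched in a few lines. A minimal illustration, assuming a flat 2D plane, straight-line propagation at the speed of light, and invented sensor coordinates (a real network must also model ionospheric bounces and timing error):

```python
# Minimal time-of-arrival geolocation sketch (illustrative only). Assumes a
# flat 2D plane and straight-line propagation at light speed; the sensor
# coordinates and strike location here are invented for the example.
import numpy as np
from scipy.optimize import least_squares

C = 299_792.458  # speed of light in km/s

# Four synchronized receivers, positions in km
sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [450.0, 400.0]])

def arrival_times(strike_xy, t0):
    """Time each sensor hears the radio burst from a strike at strike_xy."""
    return t0 + np.linalg.norm(sensors - strike_xy, axis=1) / C

def residuals(params, observed):
    """Mismatch between predicted and observed arrival times."""
    x, y, t0 = params
    return arrival_times(np.array([x, y]), t0) - observed

true_strike = np.array([120.0, 260.0])
observed = arrival_times(true_strike, t0=0.0)

# With at least three sensors, the strike's position and time are pinned down.
fit = least_squares(residuals, x0=[250.0, 250.0, 0.0], args=(observed,))
print(fit.x)  # ~[120.0, 260.0, 0.0]
```

The synchronization matters: the method only works because every receiver timestamps the burst against a shared clock.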
Not that this signal is easy to parse, mind you. You've got to account for the reflections off the ionosphere, for instance. So the bulk of the company's effort, Said explains, "is devoted to properly interpreting those signals so that we can extract reliable information from them." Reliability is paramount, because it's not just the National Weather Service that uses Vaisala's data. Airports appreciate knowing if a thunderstorm is incoming to plan for delays or cease fueling operations. The system can even work on a forensic level too, perhaps to discern if a lightning strike may have started a wildfire.
So if lightning thinks it can just strike willy-nilly and still escape notice, it’s got another thing coming.
Update, 8/28/19, 1:30 pm ET: After further analysis, Vaisala has determined that lightning struck even closer to the North Pole in an earlier storm on June 28, just 110 miles away compared to 300 miles away in the August 10 storm. This story has been updated to reflect that new figure.
" |
125 | 2,020 | "The Arctic Is Getting Greener. That's Bad News for All of Us | WIRED" | "https://www.wired.com/story/arctic-greening" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science The Arctic Is Getting Greener. That's Bad News for All of Us 1 / 6 Save this story Save Save this story Save Right now the Arctic is warming twice as fast as the rest of the planet, and transforming in massively consequential ways. Rapidly melting permafrost is gouging holes in the landscape.
Thousands of years’ worth of wet accumulated plant matter known as peat is drying out and burning in unprecedented wildfires.
Lightning—a phenomenon more suited to places like Florida—is now striking within 100 miles of the North Pole.
All the while, researchers are racing to quantify how the plant species of the Arctic are coping with a much, much warmer world. In a word, well.
And probably: too well.
Using satellite data, drones, and on-the-ground fieldwork, a team of dozens of scientists—ecologists, biologists, geographers, climate scientists, and more—is finding that vegetation like shrubs, grasses, and sedges is growing more abundant. The phenomenon is known as "Arctic greening," and with it comes a galaxy of strange and surprising knock-on effects with implications both for the Arctic landscape and the world's climate at large.
Despite its icy reputation, the Arctic isn't a lifeless place. Unlike Antarctica, which isn't home to trees or to many animals that you can see without a microscope, the Arctic is teeming with life, particularly plants. Its grasses and shrubs are beautifully adapted to survive winters in which their days are completely lightless, because the vegetation lies covered in a layer of snow, surviving mostly underground as roots. When the thaw comes, the plants have perhaps a month to do everything they need to survive and reproduce: make seeds, soak up nutrients, gather sunlight.
But as the world has warmed over the past few decades, satellites have been watching the Arctic get greener—with various levels of precision. One satellite might give you the resolution on the scale of a football field, another on the scale of Central Park. These days, the resolution of fancy modern cameras might be 10 by 10 meters. But even then, ecologists can’t decipher exactly what these plant communities look like without being on the ground.
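The article doesn't say which index these satellite records use, but the standard measure of "greenness" in multispectral imagery is the normalized difference vegetation index, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch, with invented reflectance values standing in for real scenes:

```python
# Minimal NDVI sketch (illustrative). Healthy vegetation reflects strongly in
# near-infrared (NIR) and absorbs red light, so NDVI rises as a pixel greens.
# The reflectance arrays below are invented stand-ins for real satellite bands.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, pixel-wise, in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Two "years" of the same 2x2 tundra scene: NIR climbs as shrubs fill in.
red_then, nir_then = [[0.08, 0.09], [0.10, 0.08]], [[0.25, 0.22], [0.20, 0.24]]
red_now, nir_now = [[0.07, 0.08], [0.09, 0.07]], [[0.35, 0.33], [0.30, 0.36]]

greening = ndvi(nir_now, red_now) - ndvi(nir_then, red_then)
print(greening)  # positive values mark pixels that got greener
```

An index like this depends on clean reflectance measurements, which the polar environment makes hard to come by.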
First, the Arctic is dark 24 hours a day in the winter. "That's a long-running challenge of using satellites in that part of the world," says Jeffrey Kerby, an ecologist and geographer formerly at Dartmouth College and now at the Aarhus Institute of Advanced Studies. He was one of the co-lead authors on a recent paper on Arctic greening published in Nature Climate Change by this international group of scientists, who received funding from the National Geographic Society and government agencies in the UK, North America, and Europe.
And even when you get 24 hours of light in the summer, it’s a problematic kind of light. “Because the sun is so low, it can cast big shadows all over the place, and people generally aren't interested in studying shadows,” Kerby says.
So with the help of small drones the team launches right from the field, researchers have been scouring landscapes to decode in fine detail how the Arctic is transforming, and marrying that with the data coming from the eyes in the sky. A drone can get close enough to the ground to tell them which plants might be benefiting in a particular landscape as it warms. The researchers can also quantify how an area is changing year over year by having the drones photograph the same regions, and by deploying, of all things, tea bags. “We stick tea bags in the ground, and over one year, two years, etc., and see how much of that gets decomposed across these different microclimates,” says Isla Myers-Smith, a global change ecologist at the University of Edinburgh and co-lead author on the new paper.
They're finding that the change isn't driven by invasive species moving into the Arctic to exploit the warming climate. It's more that taller native species like shrubs are becoming more abundant. "It means that canopy heights are taller as a whole, and that has significant implications," says Myers-Smith. "It might be starting to influence the way the tundra plants protect the frozen soils and carbon below." For instance, taller shrub canopies trap more snow in the winter, instead of allowing the stuff to blow around the tundra. This snow might build into an insulating layer that could prevent the cold from penetrating the soil. "So that accelerates—potentially—the thaw of permafrost," says Myers-Smith. "And you can also change the surface reflectance of the tundra when you have these taller plants, if they stick up above the snowpack." Vegetation is darker than snow, and therefore absorbs more heat, further exacerbating the thaw of the soil.
Thawing permafrost is one of the most dreaded climate feedback loops. Permafrost contains thousands of years of accumulated carbon in the form of plant material. A thaw—perhaps exacerbated by more abundant vegetation—threatens to release more CO 2 and methane into the atmosphere. More carbon in the atmosphere means more warming, which means more permafrost thaw, ad infinitum—or at least until the permafrost is gone.
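To see why researchers dread this loop, here is a toy iteration of the feedback just described. Every number is invented purely to show the compounding shape, not to model the real Arctic:

```python
# Toy positive-feedback loop (every parameter is invented for illustration):
# warming thaws permafrost, thawed permafrost releases carbon, and that
# carbon adds warming. The point is the compounding shape, not real numbers.
warming = 1.0          # degrees above baseline
frozen_carbon = 100.0  # arbitrary units still locked in permafrost

for decade in range(10):
    released = min(frozen_carbon, 2.0 * warming)  # thaw scales with warming
    frozen_carbon -= released
    warming += 0.05 * released                    # emissions add warming
    print(f"decade {decade}: warming={warming:.2f}, frozen={frozen_carbon:.1f}")
# Each pass through the loop releases more than the last, for as long as
# the frozen reservoir holds out.
```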
Permafrost thaws, and the land slumps. Photograph: Gergana Daskalova/National Geographic Society.
A permafrost melt also releases more water into the soil, leading to yet more knock-on effects for the vegetation. "When the ground is frozen, plants don't have any access to water," says Kerby. "So it's almost like being in a desert for part of the year." Frozen ground limits when the plants can grow. But an earlier thaw could mean that plants kickstart their growth earlier in the year. As those soils thaw deeper and deeper, they will also release gobs of nutrients that have been trapped underground for perhaps thousands of years, supercharging the growth of these increasingly abundant Arctic plant species. This means the landscape could get even greener and even more hospitable to plants that can take advantage of warmer temperatures.
And really, underground is where so much of the Arctic mystery still lies: In these tundra ecosystems, up to 80 percent of the biomass is below ground. (Remember that in the deep chill of winter, roots survive underground.) “So when you see the green surface, that's just the tip of the iceberg, in terms of the biomass in these systems,” says Myers-Smith. “So it could be that a lot of the climate change responses of these plants are actually all in the below-ground world that we have a very difficult time tracking and monitoring.” Another big unknown is how animal species—big and small—fit into a warmer, greener landscape. How might tiny herbivores like caterpillars take to an increasingly lush Arctic? How might large herbivores like caribou exploit the vegetation bounty, and might it even influence their migratory patterns, potentially threatening an important source of food for native people? And how might all these herbivores hoovering up the extra vegetation affect the carbon cycle? That is, the natural movement of carbon from soil to animals to the atmosphere.
For the scientists, the really worrying bit is the fact that there’s twice as much carbon in permafrost as there is in the atmosphere. “That's a lot of carbon that has been sitting there for thousands of years, kind of locked up in ice,” says Kerby. “And as that permafrost starts to thaw, microbes can start digesting all of the dead leaves and dead animals.” The greening of the Arctic could already be exacerbating this thaw.
It might seem weird for humans to be rooting against plants. But sometimes greener pastures aren’t a good thing.
" |
126 | 2,020 | "Why Facebook Censored an Anti-Trump Ad | WIRED" | "https://www.wired.com/story/plaintext-why-facebook-censored-an-anti-trump-ad" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Steven Levy Business Why Facebook Censored an Anti-Trump Ad Facebook labeled an anti-Trump ad as “partly false" and dramatically depressed its circulation when users tried to share the video for free.
Photograph: Doug Mills-Pool/Getty Images. Hello again. Another week of tough news. At least you've got Plaintext, and at least I've got you readers. Let's stick together.
For now, this weekly column is free for everyone to access. Soon, only WIRED subscribers will get Plaintext as a newsletter. You'll get to keep reading it in your inbox by subscribing to WIRED (discounted 50%), and in the process getting all our amazing tech coverage in print and online.
On May 4, a group of disaffected Republicans known as the Lincoln Project posted an ad on Twitter, YouTube, and Facebook. Inspired by Ronald Reagan’s classic 1984 “Morning in America” ad, the Lincoln Project’s “Mourning in America” recited a litany of grim statistics with depressing images of pandemic America, laying the blame on President Trump. The president was not happy, attacking the ad that very evening.
A day later, Facebook labeled the ad “partly false,” rejected it as inappropriate, and dramatically depressed its circulation when users tried to share the video for free.
If you have been following Mark Zuckerberg’s statements on political advertising, this might seem puzzling. Despite criticism, he has articulated a public policy of not filtering or even fact-checking political advertisements on the platform. It’s up to users to decide the truth for themselves. “I don’t think that a private company should be censoring politicians or news,” he told Gayle King on CBS.
So why did Facebook refuse to run "Mourning in America" as an ad, and bury it otherwise? The reason, explains Facebook spokesperson Andrew Stone, is that the Lincoln Project ad is not from a campaigning politician but from an outside organization. If candidates for public office pay Facebook to circulate even demonstrably false claims, Facebook will happily place them in the News Feeds of a targeted audience. But if the advertiser is not running for office, Facebook will append a scarlet letter to ads identified as making exaggerated claims and misstatements.
But wait: The "Mourning" ad seems accurate. Online critics wondered whether Facebook—whose handling of misinformation in the 2016 election seemed to benefit the Trump campaign—was doing the White House a favor in censoring the Lincoln ad.
The truth is not so nefarious but not terribly comforting, either.
Facebook relies on outside fact-checking organizations to determine the truthfulness of controversial content. These operations choose what stories to vet, either by identifying controversial content or by selecting from a dashboard of popular content provided by Facebook. In this case, Politifact, the fact-checking branch of the nonprofit Poynter Institute, decided to look at the ad, which had instantly garnered a lot of attention.
According to Aaron Sharockman, Politifact's executive director, his fact-checker didn't have a problem with any of the many statistics in the ad about the coronavirus death toll or unemployment numbers. Instead, Politifact zeroed in on one sentence: "Donald Trump bailed out Wall Street but not Main Street." To many (like me) this may seem like an opinion, whose worth depends on data. The Lincoln Project provided a number of sources, including Bloomberg, NBC, Vanity Fair, and even the New York Post, where a Fox Business News reporter wrote in an op-ed, "Wall Street traders will make money, while Main Street businesses face economic conditions not seen since the Great Depression." But Politifact chose an absolutist interpretation. Because the CARES Act passed by Congress and signed by Trump did some things for Main Street, it reasoned, in effect Trump had indeed bailed out mainstream America. "Most people who argue seem to suggest that maybe Trump has bailed out Wall Street more than Main Street," says Sharockman. "But that's not what the ad said. So I feel real good about the rating as calling it false." On Facebook the warning label read "Partly False." I asked Sharockman, since every other sentence was indisputably factual, why the ad wasn't labeled "Mostly True." He told me that the only alternatives were "True, Partly False, and False." (Later I did my own fact-check: Politifact's website describes a "Truth-o-Meter" that includes categories like "Mostly True" or "Half True." Apparently, Facebook accepts only the three that Sharockman mentions.) Fact-checking of political ads can be more art than science; it often rests on slender distinctions. But as Politifact knows, the power of those distinctions can become grotesquely distorted when they are translated to labels that Facebook blindly applies. The penalties are severe. When an ad is deemed False or Partly False by a fact-checking organization, Facebook will pull the advertisement. Even worse, when people share the ad with friends, it is treated as toxic content and buried in the News Feed. When people do see the post, they must click through the warning label to view the actual ad, as if it contained gory medical scenes or other disturbing content.
The Lincoln Project complained to Facebook and got no formal response beyond a referral to the fact-checkers. Politifact’s stance is that the organization should have changed the ad to say that Trump helped Wall Street more than Main Street. Sharockman says the fix would take only six seconds. Jennifer Horn, a cofounder of the Lincoln Project, says that her organization will not bow to censorship. (There is also the fact that this controversy has made the ad even more effective—the Lincoln Project says that it has been a champion in eliciting donations.) She notes that YouTube and Twitter have not objected to the content of the ad, and several television markets have run it without asking for changes.
Only Facebook has effectively banned it. Ironically, if Joe Biden had placed an identical ad, Facebook would not have impeded it or given it a warning label. Even if Biden said that Trump personally breathed deadly germs on 60,000 Americans, Facebook would have let that stand. Complete political freedom! Or, depending on who is speaking, censorship by nitpick.
In 2007, Facebook announced its first significant ad strategy.
I wrote about it in Newsweek.
Incidentally, what I did not know at the time was that the internal codename for the ad products announced in the fall of 2007 was "Pandemic": With its new program, Facebook announced that it was empowering advertisers to target those ads using the information on the personal profiles that members supply to Facebook. A national advertiser could sell ads to a huge group (all women between 25 and 40), or a local advertiser, like a restaurant, could pay much less to reach a microgroup (Ivy League-educated Indian-food lovers in a specific ZIP code). You could even target people who work for a specific company; Facebook itself has used this feature to solicit employees from its competitors. It's an innovative strategy. But since Facebook users originally supply that information to share with friends, and not with advertisers, they may believe that utilizing those details for ad targeting wasn't part of the deal. (Facebook's privacy officer, Chris Kelly, says that the personal data itself is not given to the advertisers.)
Max asks, "Hey Steven! I saw your Plaintext posted on the site today, and I was trying to figure out how I actually subscribe to get it sent to me. I've been a subscriber of WIRED for a few months now, and I'm sad to say I wasn't subscribed to Plaintext, so how do I do that?" Max, WIRED subscribers should get Plaintext automatically—unless they opted out of getting mail from us, which is their absolute right. Maybe when you opted out you didn't realize that you would be turning down Plaintext in your inbox every week. In any case, we hope to build an option that lets opted-out subscribers sign up for this newsletter. Can't say when, but it's on the list. And as a bonus for pointing this out to us, we've managed to get you signed up right now! Just don't hit "Unsubscribe" by mistake.
You can submit questions to mail@wired.com.
Write ASK LEVY in the subject line.
Ever since the Supreme Court began conducting oral arguments by phone, Justice Clarence Thomas has been asking more questions than he has for over a decade. That would mildly qualify as a sign of apocalypse on its own, but this week he invoked the name Frodo Baggins as the choice of a hypothetical electoral college voter. End-Times gold! Andy Greenberg’s epic story of a hacker who saved the internet and then got busted by the FBI is worth a good chunk of your lockdown time.
Can you teach a computer common sense? The scientist who helped IBM’s Watson win Jeopardy! is attempting to do just that.
What do you do when your graduation event is voided by Covid-19? Get your diploma in Minecraft, of course.
Want to be more depressed than you already are? Engage in what I'm calling Empty City Porn. Here are some stunning examples of depopulated New York City locations from photographer Natan Dvir. (He wore a mask, in case you're wondering.) Don't miss future subscriber-only editions of this column.
Subscribe to WIRED (50% off for Plaintext readers) today.
" |
127 | 2,019 | "Monterey Bay Is a Natural Wonder—Poisoned With Microplastic | WIRED" | "https://www.wired.com/story/monterey-bay-microplastic" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science Monterey Bay Is a Natural Wonder—Poisoned With Microplastic Eva Hambach/AFP/Getty Images Save this story Save Save this story Save California’s Monterey Bay is one of the more pure, more dynamic coastal ecosystems on Earth. Otters—once hunted nearly to extinction—float among towering kelp forests, which themselves have rebounded thanks to the booming otter population’s appetite for kelp-loving sea urchins. Great whites visit from time to time, as do all manner of whales and dolphins. All told, it’s one of the greatest success stories in the history of oceanic conservation.
Yet it's poisoned with a menace no amount of conservation can stop: microplastic. Today in the journal Scientific Reports, researchers present a torrent of horrifying findings about just how bad the plastic problem has become. For one, microplastic is swirling in Monterey Bay's water column at every depth they sampled, sometimes in concentrations greater than at the surface of the infamous Great Pacific Garbage Patch. Two, those plastics are coming from land, not local fishing nets, and are weathered, suggesting they've been floating around for a long while. And three, every animal the researchers found—some that make up the base of the food web in the bay—was loaded with microplastic.
To get their samples, the researchers used ROVs outfitted with specialized samplers, which pumped large volumes of seawater through a mesh filter. Plastics are so ubiquitous in human inventions, however, that they had to make sure the ROV itself didn’t taint their samples.
The researchers found the amount of microplastic captured at the surface is about the same as it is down at 3,200 feet. But between 650 and 2,000 feet, the counts skyrocket.
Scientists have suspected that ocean plastics aren’t necessarily concentrated at the surface, contrary to what you’d assume given the Great Pacific Garbage Patch.
This is one big reason why they’ve scoffed at the idea of the Ocean Cleanup project, which is essentially a giant tube for catching surface plastic. It snapped shortly after its deployment in the Patch. But until now no one has gathered good data on what that distribution of plastic looks like up and down the water column.
“We know how much plastic is going into the ocean, and we kind of have a rough idea of what's at the surface of the ocean, but those numbers don't really match,” says oceanographer Kim Martini, who wasn’t involved in this work. “So from a budgeting point of view, it has to go someplace else, and we think it goes to the deep ocean. This is another piece of that puzzle.” Notice the stunning increase in the number of microplastic particles collected starting at 200 meters deep.
(Chart: Choy et al., Scientific Reports.)
A still outstanding piece of that puzzle, though, is where this microplastic is coming from. By running tests in the lab, the researchers found that most of the particles they collected were PET, a component of single-use plastics. Then the question becomes, where are things like plastic bottles breaking down into microplastic in the sea? Does it happen at the surface, or do the bottles sink and then break down? How do the tiny particles swirl in currents? All important questions for future research.
What was clear from this work, though, is that the microplastic is weathered, suggesting particles had been floating around for perhaps years. “Just like a library book that's been in circulation for 20, 30 years versus something that's shrink-wrapped that's just come in the mail to your front doorstep, their condition is very different, though they're the same book,” says Kyle Van Houtan, chief scientist at the Monterey Bay Aquarium and a coauthor on the new paper.
These old plastics aren’t just floating around harmlessly—they’re making their way into animals. The researchers concentrated on two species, pelagic red crabs and giant larvaceans, bizarre critters that make mucus nets to catch food. They found that all specimens carried microplastic, suggesting that both currents and animals transport plastic around the ecosystem.
Take the pelagic red crab. "It's like popcorn shrimp for bluefin tuna, humpback whales, migratory birds like albatross," says Van Houtan. When a pelagic red crab becomes someone's lunch, it can bring microplastic from the depths up to the surface.
A pelagic red crab at right. Photograph: Monterey Bay Aquarium.
And the giant larvaceans. They periodically discard their mucus nets—and the plastics those nets have collected—which then sink. "That's a vehicle to take a lot of plastic out of the water column and inject it into the bottom of the ocean," says Van Houtan. "So even though most of the plastic we found was far below the surface, there are so many mechanisms to take that plastic out of the water column and inject it into the seafloor or inject it into the surface food web." Not helping matters is the fact that Monterey Bay is an extraordinarily productive ecosystem. "The largest migration on the planet is not birds flying south from the forests of North America to the tropics every year," says Van Houtan. "It's the vertical migration that happens every day in the ocean, where everything from zooplankton to even air breathers move up and down the water column." During the day, smaller, more vulnerable organisms retreat to the darkness of the depths and return to the surface under the cover of darkness. In doing so, they're dragging the food web through the water column, unwittingly spreading the plague that is microplastic.
Only recently have researchers begun to test what happens when creatures ingest the stuff. "They've reported effects on kidney function, liver function, reproductive effects, but these are mainly in laboratory settings," says Scripps Institution of Oceanography researcher Anela Choy, lead author on the new paper. "So how that extrapolates to the real world, we're not quite there yet." Organisms might not need to ingest microplastic to be affected by it. Last month, researchers published a paper showing how chemicals that plastics leach into the water, known as leachates, inhibit the growth of the marine bacteria that provide perhaps 20 percent of the air we breathe and also capture carbon from the atmosphere. But this too was done in the lab, so it's hard to tell yet if it's a problem out in the wild.
What is increasingly clear, though, is that few places on Earth seem to be left untouched by plastics. Even supposedly pristine mountaintops collect microplastic blowing in the wind.
Short of someone inventing a magnet that somehow attracts microplastic, there’s no way we can rid Monterey Bay of this disease. But this new analysis points a big finger at who we can hold responsible: makers of single-use plastics.
“I think cleaning up is not the first step we should take,” says Martini. “The first step we should really take is we should treat plastic like another pollutant, because it is. We should regulate it like that, and we should make manufacturers responsible for their own pollutants in this case.” Short of humanity completely phasing out plastics, Monterey Bay will never be the same again. Once again, we’ve failed a treasure of the natural world.
" |
128 | 2,019 | "You’ve Been Drinking Microplastics, But Don’t Worry—Yet | WIRED" | "https://www.wired.com/story/microplastic-who-study" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science You’ve Been Drinking Microplastics, But Don’t Worry—Yet Robert Taylor/Alamy Save this story Save Save this story Save Scientists have begun to expose a global horror show : microplastic pollution. Tiny bits of plastic have been showing up in unlikely places, including Arctic ice floes.
The particles are blowing in the air, so we’re breathing microplastic and eating it and drinking plastic-infused water.
The implications for human health are potentially huge.
Potentially.
The problem is that little is known about how microplastics affect the human body. That makes things difficult for the World Health Organization, which today released an exhaustive report on the state of research on microplastics in drinking water. The takeaway: As the limited science stands now, there’s no evidence that drinking microplastics is a threat to human health.
“We know from the data that we've reviewed that we're ingesting them, and we know that's caused concern among consumers,” says Bruce Gordon, who helped assemble the report as a coordinator with the WHO. “The headline message is to reassure drinking-water consumers around the world that based on our assessment of the risk, that it is low.” The report urges the scientific community to further study the potential impact of microplastics on human health, and fast. And it pleads for the world at large to rein in its plastic pollution catastrophe, because human beings aside, microplastics have poisoned even remote reaches of this planet. They’re swirling deep in ocean currents and showing up in the seafood we eat.
The pervasiveness of microplastic particles is horrifying, and there’s no way we can scrub the planet of them.
“What we don't know is enormous,” says University of Strathclyde environmental pollution scientist Deonie Allen, who wasn’t involved in the report.
Humans produce an astounding amount of plastic—nearly 400 million tons of the stuff in 2015, and production is expected to double by 2025. An estimated 8 million tons enter the ocean every year, yet researchers can only account for 1 percent of that. The rest has seemingly disappeared.
Microplastics are getting into drinking water in a number of ways. Some of it is carried in the air—“city dust,” as it's called, all the particles flying off shoes and tires and whatnot—and landing in freshwater sources like reservoirs. Plastic trash gets in there as well, growing brittle as it bakes in the sun, and breaking down over time into tinier and tinier pieces. Textiles like yoga pants slough off microplastic fibers, which flow out with laundry water.
Freshwater sources are, of course, treated before being distributed to customers, which removes most of the microplastic, the new report says. But it also cautions that in the developing world, people don't always have access to this kind of water treatment. Also, treatment equipment that is itself made of plastic may contribute microplastics to the water supply.
At this early stage of research, the number of studies is small, and researchers have not yet settled on consistent methodologies. The nine studies compiled by the WHO report reflect the scattered nature of the work so far. Some looked at bottled water, others tap water. Some filtered their water samples down to micron-scale particles, others included particles 100 times bigger than that. Some determined the types of plastic they found, others didn't. Unsurprisingly, the level of contamination they report ranges from zero to thousands of particles per liter. The upshot is that the findings are almost impossible to compare.
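One likely reason the numbers diverge so wildly (an inference on my part, not a claim from the report): particle counts are dominated by the smallest sizes, so the minimum size a study can detect largely sets its total. A toy sketch with an assumed power-law size distribution:

```python
# Toy illustration (assumed power-law size distribution with an invented
# exponent): most particles are tiny, so the smallest size a study can
# detect largely determines its reported count.
import numpy as np

rng = np.random.default_rng(0)
# Pareto-distributed particle sizes in microns, minimum size 1 um
sizes = 1.0 * (1.0 + rng.pareto(1.5, size=100_000))

for cutoff_um in (1, 10, 100):
    count = int(np.sum(sizes >= cutoff_um))
    print(f"particles >= {cutoff_um} um: {count}")
# Same simulated water, wildly different counts depending on the cutoff.
```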
Then there’s the range of effects the particles might have in the human gut.
The WHO report notes that most microplastic particles appear to pass through harmlessly. But we need more research about how the size of the particles affects their passage, or if gut tissue might absorb the smaller ones. And then there’s the stuff that comes along with plastic— the chemicals they leach, known as leachates , and also the foreign organisms like bacteria and viruses, known as biofilm, that may hitch a ride on the particles.
That’s a whole lot of unknowns around microplastics, and the WHO stresses that when it comes to drinking water, we have plenty of well-documented problems to worry about. “We need to keep the focus on known risks,” says Gordon. “We know now from our WHO data and UNICEF data that 2 billion people drink water currently that is fecally contaminated, and that causes almost 1 million deaths per year. That has got to be the focus of regulators around the world.” Meanwhile, people the world over will continue to drink and eat and breathe microplastics, as scientists work frantically to better understand the potential impacts on human health. We live on a plastic planet now, and we have to prepare ourselves for the reckoning.
" |
129 | 2,019 | "Baby Fish Feast on Microplastics, and Then Get Eaten | WIRED" | "https://www.wired.com/story/baby-fish-are-feasting-on-microplastics" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science Baby Fish Feast on Microplastics, and Then Get Eaten Photograph: Matt Porteous/Getty Images Save this story Save Save this story Save Teeming off Hawaii’s famous beaches is a complex web of life—sharks, turtles, seabirds—that relies enormously on tiny larval fish, the food for many species. In their first few weeks of existence the larvae are at the mercy of currents, still too puny to get around on their own, gathering by their millions in surface “slicks” where currents meet. And it’s here where they’re increasingly meeting a pernicious, omnipresent foe and mistaking it for food: microplastic.
Researchers today published an ominous report showing that these slicks pack 126 times the concentration of microplastic as nearby surface waters, and eight times the density of plastic as the Great Pacific Garbage Patch.
Microplastic particles outnumber larval fish in the slicks by a factor of seven to one, and dissections of the larvae reveal that many have plastic in their bellies. The consequences, both for these species and the food web as a whole, are downright terrifying.
“Seabirds feed on larval fish, adult fish feed on larval fish—it's a prominent food source,” says NOAA oceanographer Jamison Gove, co-lead author on the new paper, published in the Proceedings of the National Academy of Sciences.
“So that clearly has implications for how plastics can be distributed and quickly get higher up the food chain.” Gove and his colleagues dissected hundreds of larval fish and found that 8.6 percent of specimens from slicks—which appear as smooth ribbons on the surface—contained microplastics, more than twice the rate as larvae in nearby non-slick surface waters. Less than 10 percent may not sound like much, but we’re talking about innumerable little larvae out there in the slicks, so that percentage translates into a huge population of tainted organisms.
A larval flying fish (top) and triggerfish (bottom). Their ingested plastics are zoomed in. Courtesy of Jonathan Whitney/NOAA Fisheries.
These larvae don't yet have fully developed immune systems to deal with ingested microplastics, which is particularly worrisome when you consider that the particles are known to accumulate pathogens like bacteria as they float around the sea. "One possibility is that because larval stages are so vulnerable, eating one piece of plastic could actually potentially kill them," says NOAA marine ecologist Jonathan Whitney, co-lead author on the paper. It's possible that far more larvae might be eating microplastics, perishing, and sinking to the bottom of the sea than scientists know.
The larvae might be mistaking plastics for some of their more common foods—other species of plankton that float around on currents. Most of the ingested particles were transparent or blue, the same color as their prey, such as tiny crustaceans called copepods. Nearly all of the consumed microplastics were fibers, from sources like plastic fishing nets, which slough off fibers that resemble the antennae of copepods.
The researchers also found that different species of larval fish had different ingestion rates. “That's really interesting,” Gove says, “because what I think it implies is that either different fish potentially have larger eyes or some other adaptation that they can distinguish between plastics and their prey better, or their food source is different.” Either way, microplastics have entered Hawaii’s oceanic food chain in a big way. The researchers found that species like mahi-mahi and swordfish are readily ingesting the stuff as they’re growing as larvae. And if that affects their survival, it’s bad news for the species themselves, and the species that eat them: Predators could well be bio-accumulating microplastics in their own bodies as they dine on tainted larvae, with as-yet-unknown consequences. And keep in mind that you and I are at the end of that food chain.
"I think this paper does a great job of illustrating that plastic and plankton and larval fish interact with the ocean currents the same way," says oceanographer Jennifer Brandon, who studies microplastics at the Scripps Institution of Oceanography and who wasn't involved in this new work. And there's no way to clean up that plastic without also capturing all that life, "because they're all concentrated in the same places." Civilization's addiction to plastic is out of control, and the reckoning has arrived.
The question now is figuring out just how badly we’ve already corrupted the vast ocean ecosystem.
" |
130 | 2,020 | "Facebook Employees Take the Rare Step to Call Out Mark Zuckerberg | WIRED" | "https://www.wired.com/story/facebook-employees-rare-step-call-out-mark-zuckerberg" | "Steven Levy Business Facebook Employees Take the Rare Step to Call Out Mark Zuckerberg
Facebook CEO Mark Zuckerberg is defending his decision to not flag misleading posts from President Trump.
Photograph: Drew Angerer/Getty Images
What happens when an immovable object meets a disgruntled workforce? We’re about to find out at Facebook. CEO Mark Zuckerberg has consistently refused to budge from allowing politicians—most conspicuously, Donald J. Trump—to post content that would violate the company’s rules against harm and misinformation. In dealing with recent Trump pronouncements promoting misinformation about voting and using the language of racism to encourage the shooting of protesters, Zuckerberg has chosen to leave posts (mostly cross-posted tweets) unfettered. Even Twitter, which previously gave Trump similar leeway, now warns users before they can see those Trump misrepresentations.
Now, a few Facebook employees have taken the rare step of speaking out publicly against their boss. “I'm a FB employee that completely disagrees with Mark's decision to do nothing about Trump's recent posts, which clearly incite violence. I’m not alone inside of FB,” tweeted Jason Stirman, an R&D executive who previously worked at Twitter and Medium. Another Facebook exec, Ryan Freitas, director of News Feed product design, wrote, “Mark is wrong, and I will endeavor in the loudest possible way to change his mind.” One engineer, Lauren Tan, tweeted, “Facebook’s inaction in taking down Trump’s post inciting violence makes me ashamed to work here.”
Dissenting voices aren’t unusual in Facebook’s internal bulletin boards—which, according to reports, have recently been overflowing with frank complaints about Zuckerberg’s policy. But going public is a violation of what was once a near-omerta against criticizing Zuckerberg on the record. Even more striking, some Facebookers participated in a “virtual walkout” on Monday. (Storming out of headquarters isn’t an option, since nearly everyone at Facebook is working at home during the pandemic.) Zuckerberg noticed. He is moving up his end-of-the-week employee Q&A to Tuesday so he can respond. But will he listen to his workers and take down the posts? If history is a guide, the answer is no.
For one thing, Zuckerberg is famously stubborn. This is a life-long trait. When I interviewed his parents for my book about Facebook, they told me about Mark’s decision to leave the local public high school because it didn’t have enough computing resources and advanced classes. His family was happy to send him to a costly nearby private school, Horace Mann. But Mark had heard good things about Phillips Exeter Academy, a boarding school in New Hampshire. His mother was already losing one child that year—Mark’s sister Randy would be going to Harvard—and she didn’t want to see her only son leave the house, too. So she begged him to at least interview at Horace Mann. “I’ll do it,” he said. “But I’m going to Phillips Exeter.” And that’s what happened.
He runs his company that way too. The business is set up so that his voting shares give him a majority. And while he does seek the opinions of others, he has often chosen to override compelling objections to products and policies that turned out to be harmful and sometimes wrong. (Examples: the 2007 Beacon product that violated privacy by reporting user web purchases on the News Feed. Or Instant Personalization, which gave other websites private information about a user’s friends. That was the same privacy violation that led to Cambridge Analytica.)
In those cases, the dissent was kept private—even years later some of those describing it to me would not go on the record. Now the complaints are public, and Zuckerberg has to respond. He made a start on Friday with a long, tortured explanation of why he wouldn’t budge on keeping up Trump’s content. While admitting he struggled with the issues, he went into the weeds of policy to explain why this particular content managed to stay within the boundaries of acceptable Facebook speech. “These are difficult decisions and, just like today, the content we leave up I often find deeply offensive,” he wrote. “We try to think through all the consequences. People can agree or disagree on where we should draw the line, but I hope they understand our overall philosophy is that it is better to have this discussion out in the open, especially when the stakes are so high.” To satisfy his employees, he’ll have to do better than those Jesuitical contortions. In the context of Facebook’s overall policy—a complex set of rules drawn to allow the freest expression while excluding the most vile content like hate speech and porn—last week’s decisions might make sense. But it isn’t just those repurposed tweets that make Facebook employees ashamed. The company’s workers are responding to Facebook’s larger role in aggravating the nation’s troubled discord. In following Zuckerberg’s zeal to enable the widest possible expression, Facebook has hosted countless posts that may not violate its rules but have eroded public civility, providing a dog-whistle soundtrack to the intolerance that Zuckerberg admits is disgusting. It’s also getting harder to square the CEO’s professed neutrality in interpreting the rules with what seems like constant concessions to conservative forces. Not to mention unpublicized visits with the president himself.
Sooner or later, Zuckerberg has to deal with the larger issue of how Trump has been exploiting social media to spread the poison of division in the body politic. It is for that reason, and not a reposting of a tweet or two, that some of his employees are walking out, others say they are about to quit, and many more will turn down Facebook recruitment offers. And the problem will only get worse as Trump seems hell-bound to post ever more extreme pronouncements.
For now, Facebook says that employees who participate in the walkout will suffer no consequences. They won’t even be charged with a sick day. Even those who post on Twitter that “Mark is wrong” will not be sanctioned.
But will they force Mark Zuckerberg to do what he doesn’t want to do? If that happens it would truly be unprecedented.
" |
131 | 2,006 | "The Resurrection of Al Gore | WIRED" | "https://www.wired.com/2006/05/gore-2" | "Karen Breslau The Resurrection of Al Gore
One evening last December, in front of nearly 2,000 people at Stanford's Memorial Auditorium, Al Gore spoke in uncharacteristically personal and passionate terms about the failed quest that has dominated much of his adult life. Save for his standard warm-up line - "Hi, I'm Al Gore, and I used to be the next president of the United States" - there was hardly a mention of the White House. Instead, during the next 90 minutes, Gore had plenty to say about thinning polar ice caps, shrinking glaciers, rising carbon dioxide concentrations, spiking temperatures, and hundreds of other data points he has woven into an overpowering slide show detailing the catastrophic changes affecting the earth's climate. The audience was filled with Silicon Valley luminaries: Apple's Steve Jobs; Google's Larry Page and Eric Schmidt; Internet godfather Vint Cerf; Yahoo!'s Jerry Yang; venture capitalists John Doerr, Bill Draper, and Vinod Khosla; former Clinton administration defense secretary William Perry; and a cross section of CEOs, startup artists, techies, tinkerers, philanthropists, and investors of every political and ethnic stripe.
After the souped-up climatology lecture, a smaller crowd dined at the Schwab Center on campus. There, at tables topped with earth-shaped ice sculptures melting symbolically in the warmth of surrounding votive candles, guests mingled with Gore and his wife, Tipper, along with experts from Stanford's Woods Center for the Environment and the business-friendly Environmental Entrepreneurs. The goal: to enlist the assembled leaders in finding market-driven, technological solutions to global warming and then, in quintessential Silicon Valley style, to rapidly disseminate their ideas and change the world. "I need your help here," an emotional Gore pleaded at the end of the evening. "Working together, we can find the technologies and the political will to solve this problem." The crowd fell hard. "People were surprised," says Wendy Schmidt, who helped organize the event and, with her husband, Google CEO Eric Schmidt, supported Gore's 2000 presidential campaign. "They think of a slide show about science, they think of Al Gore. But they come out later and say, 'He's funny, he's passionate, he's real.'" Al Gore? Five and a half years after leaving the political stage, only the fourth man in US history to win the popular vote for president without being inaugurated, Gore has deftly remade himself from an object of pity into a fearless environmental crusader. The new Gore is bent on fixing what he calls the "climate crisis" through a combination of public awareness, federal action, and good old-fashioned capitalism. He's traveling the globe, delivering a slide show that, by his own estimate, he's given more than a thousand times over the years. His one-man campaign is chronicled in a new documentary, An Inconvenient Truth , which made Gore the unlikely darling of the Sundance Film Festival earlier this year and will be released on May 26 by Paramount Classics. He has also written a forthcoming companion volume of the same name, his first book on the subject since the 1992 campaign tome Earth in the Balance: Ecology and the Human Spirit.
Along the way, Gore has become a neo-green entrepreneur, taking his messianic faith in the power of technology to stop global warming and applying it to an ecofriendly investment firm. The company, Generation Investment Management, which he cofounded nearly two years ago, puts money into businesses that are positioned to capitalize on the carbon-constrained economy Gore and his partners see coming in the near future. All the while, he has been busy polishing his reputation as the ultimate wired citizen: Not far from the Stanford campus, Gore sits on the board of directors at Apple and serves as a senior adviser to Google. Farther up Highway 101 are the San Francisco headquarters of Current TV, the youth-oriented cable network he cofounded with legal entrepreneur Joel Hyatt.
For Gore, the private-sector ventures are all pieces of the same puzzle. He's challenging the power of the investment and media industries to decide what information matters most and how it ought to be distributed. "I find a lot of joy in the fact that these parts of my life post-politics have connected into what feels like a coherent whole, in ways that I didn't consciously plan," Gore told me at the Technology Entertainment Design conference in Monterey, California, where - again - he was the star attraction. "I think I'm very lucky." This is not, of course, the image of Al Gore stored in the nation's memory. He's been filed away as a tragic character who saw his victory hijacked by the Supreme Court. (In the film, he addresses the experience in a poignant passage: "That was a hard blow, but what do you do? You make the best of it.") How Gore has reengineered himself as a hero of the new green movement is a story known so far by only the relative few who have seen him in action lately. "You have a sense that this is the moment in his life, as though all the work he's been doing is now coming to a head," says film director Davis Guggenheim, who spent months traveling with Gore while shooting An Inconvenient Truth.
"City by city, as he gives this presentation, he is redeeming himself in a classically heroic way - someone who's been defeated and is lifting himself out of the ashes." Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Al Gore's redemption begins aboard a sailboat in the Ionian Sea. There, in waters once traveled by Odysseus during his long journey home after the Trojan War, Al and Tipper retreated during the summer of 2001 to recover from their ordeal. In the months immediately following his searing loss, Gore had kept himself busy, teaching at several universities and working with Tipper on a book about the American family. The couple abandoned Washington and moved back to Nashville, Tennessee, where they had lived as newlyweds and where their older daughter, Karenna, was born. There they reconnected with old friends who had nothing to do with politics. "It was very healing," Tipper says. "We renewed ourselves." Though he still hadn't decided whether he would run for president in 2004, Gore felt it was "time to recede" from the public stage, she says, to spare himself - and the polarized public - an endless rehashing of the country's civic trauma.
That July and August, Al and Tipper vacationed at a seaside estate in Spain and then sailed along the Greek coast, trying to figure out what to do next. For the first time in his high-achieving life, the man who ran for president in 1988, at age 39, and who was a candidate in every national election since, had few demands on his time. Alone but for the boat's crew, he and Tipper spent their secluded days reading, exploring, and enjoying more than a few good meals. As usual when he was on vacation, Gore didn't bother to shave. On the morning they were due to return to the US, Tipper says, she walked into the bathroom and found Gore preparing for his end-of-vacation ritual, just as he had done countless times during his days as a US congressman, senator, and vice president. "I said, 'Al, you don't have a job to go back to. The beard is fun. Leave it.' He said, 'Oh yeah,' and put down his razor. And then we came back and everyone saw the beard and it was 'yada yada yada.'" When Gore hit US shores looking like a well-fed Grizzly Adams, the late-night comics lampooned him without mercy. The political talking heads puzzled endlessly about Gore's latest "makeover" and what signal he was trying to send. "It's not as if we were talking about Allen Ginsberg," Tipper told me, clearly amused by the image of her husband as a closet counterculturist. "It was just his way of saying he was free."
As Gore started traveling the country again, tentatively feeling out campaign donors and testing his political viability before select audiences, it soon became clear that his heart was no longer in the hunt. In late September 2001, Gore was scheduled to address an influential gathering of Democrats in Iowa. He had planned to signal his interest in the 2004 race. But after the September 11 attacks, he tore up the speech and instead called for national unity, offering a salute to President Bush as "my commander in chief." Gore rejects the notion that he had somehow lost his Democratic backbone in a spasm of post-9/11 patriotism. "I genuinely think he did a good job in the immediate aftermath of September 11 and up until Tora Bora," Gore told me, referring to the battle in Afghanistan in December 2001, when Osama bin Laden eluded US forces. "And especially up until the invasion of Iraq, I think, he did a good job. But then he blew it, in my opinion." Over the next few months, Gore turned away from politics, Tipper says, and shouldered as his "ministry" the campaign against global warming. He went back to work on the climate-change slide show he had been giving since he was a junior congressman in the late '70s. After earning little more than a government paycheck and book royalties for most of his career, he also started to make some serious money. Indulging his lifetime fascination with "information ecology," Gore took up an advisory post at Google in early 2001, three years before its blockbuster IPO. Later that year, he signed on with Metropolitan West Financial, a Los Angeles-based securities firm, as a rainmaker. In March 2003, he joined Apple's board of directors.
The next year, Gore and a consortium of investors purchased a cable TV news network for a reported $70 million. Then he teamed up with David Blood, the former CEO of Goldman Sachs Asset Management, to form an investment fund based on the principles of sustainability. (The event was covered in the Financial Times under the irresistible headline "BLOOD AND GORE LAUNCH FIRM WITH A DIFFERENCE.") While the political press remained obsessed with Gore the loser (underlined by his ill-timed endorsement of Howard Dean right before the candidate tanked), by 2004 Gore the neophyte businessman had built an impressive second act around his twin passions: technology and the environment. "His new work leverages what he's really good at, which is thinking deeply about the drivers of change and having a perspective on where companies need to go in a global business environment," says Peter Knight, a longtime friend and adviser who is one of Gore's partners at Generation. "This turns out to be a wonderful convergence of his abilities and interests." Along with his bank account, the transition from public to private sector has also buoyed Gore's wounded spirit. "This is the Al that I've known since we were teenagers," Tipper says. "How does that Joni Mitchell song go? 'I was a free man in Paris, I felt unfettered and alive.' That's him."
When Gore and I meet, it is, alas, not in Paris but at the St. Regis Hotel in San Francisco, where he and Tipper recently purchased a pied-à-terre. Gore is dressed in his new uniform, looking very GQ in well-tailored trousers and a charcoal silk shirt, open at the collar. He's chucked the Brylcreem; his hair is modishly parted and flops on his forehead. At 58, he looks younger (though considerably heavier) than he did a few years ago. Earlier in the week, Gore had returned from a grueling lecture tour of Tokyo, Manila, Mumbai, and Jiddah, where he gave a speech accusing the Bush administration of "terrible abuses" against Arabs after the September 11 attacks. Gore knew he would be pilloried for criticizing Bush on foreign soil, though he never could have predicted that a trigger-happy Dick Cheney would have blasted him, as it were, out of the headlines that week with even worse vice presidential news. As he pops a beer and sprawls on a sleek leather lounger, Gore chortles at Cheney's predicament.
I ask him how his ventures in cable television and sustainable investing are supposed to fit together. Gore responds with a typically long and sometimes philosophical filibuster that eventually circles back to the question. Central to Gore's philosophy are two inextricable beliefs: first, that "the world is facing a planetary emergency, a climate crisis that is without precedent in all of human history." Second, that "the conversation of democracy is broken." Fix the latter, Gore argues, and the chances of remedying the former improve dramatically.
One reason Gore remains enthusiastic about his cable venture, Current TV, despite its startup pains and anemic reviews, is that he sees his fledgling network as busting the access monopoly that broadcast and cable outlets have held since television began. "If you want to be Thomas Paine in the information age," says Gore, "what do you do? You go to a studio, and then you can play a bit part in making a show about people who eat bugs. The barriers to entry are impossibly high." Current TV, which already seems hopelessly overtaken by the proliferation of video-sharing Web sites like Google Video and YouTube (see " The Wired Guide to the Online Video Explosion "), was conceived to give the audience the power to decide what should be carried on the network. Programming consists largely of short videos submitted by its young viewers, giving the channel the disjointed flavor of home-movie night in the dorm: A report on rebuilding with green materials in the wake of Hurricane Katrina might be followed by a clip on cockfighting in Puerto Rico and another featuring bikini-clad meter maids in Australia. Make what you will of viewers' tastes; Gore says Current TV is the answer to a crucial social challenge: How do you open up the public dialog to individuals who are shut out of television?
For all the early hype surrounding Current TV, the commercial venture that excites Gore most these days is Generation Investment Management, his global fund. As governments begin imposing carbon caps on businesses, Gore says, free markets will reward companies that practice environmental sustainability. The result: reductions in emissions of carbon dioxide and other greenhouse gases responsible for global warming. "As soon as business leaders get global warming or the environment at large," he says, "they start seeing profit opportunities all over the place. There is so much low-hanging fruit right now, it's just ridiculous." So much, in fact, that early this year venture capitalist Doerr announced that his firm, Kleiner Perkins Caufield & Byers, would launch a $100 million green-technology fund. "Greentech could be the largest economic opportunity of the 21st century," he said.
Though Generation invests in a wide range of companies, Gore and his team are especially bullish on the energy sector. We're on the verge of "a real gold rush" in renewables, conservation, and software for identifying and eliminating waste, he says. "The whole economy is going to shift into a much more granular analysis of which matter is used for what, which streams of energy are used for what. Where does it come from? Where does it go? Why are we now wasting more than 90 percent of it?" Gore shakes his head. "The investments in doing it right are not costs - they're profits." Make no mistake: Generation's strategy is to beat the market, not just to feel good about socially responsible investing. Gore's partner at the firm, David Blood, is a legend in the London investment scene. He retired as CEO of Goldman Sachs Asset Management in 2003, at age 44, after helping grow its assets from $50 billion to $325 billion in just seven years. He, too, was casting about for a way to incorporate environmental and social values into traditional investment analysis. The concept wasn't an easy sell on Wall Street. "As soon as you say 'sustainability,' some people will roll their eyes and say, 'These guys are tree huggers and they run around in sandals and they aren't serious investors,'" says Blood. "But once they listen, there is no one who says this doesn't make sense."
Gore is fond of citing a maxim from psychologist Abraham Maslow: "If the only tool you have is a hammer, you tend to see every problem as a nail." The same principle, he says, applies to investing. "If the only tool you have for measuring value is a quarterly financial report or a price tag, then everything that is excluded from that report or that comes without a price tag begins to look like it has no value." Solving the climate crisis, Gore says, will require a new set of market signals for investors. "The precision with which labor and capital are measured and accounted for is in one category. The precision with which nature is tracked and depreciated and cared for is something else again." Gore compares the voluminous but incomplete information that investors get to the intelligence briefings he used to receive each morning at the White House. "These satellites are just parked out there, grabbing signals from all across the electromagnetic spectrum." But without bringing to bear his own human intelligence, incorporating information from elsewhere on the "spectrum of value," the top secret satellite data would have made little sense. "Now, in the same way, if you rely on financial reports that are constructed without regard to environmental factors, you're excluding a lot," he says. "When you look at other parts of the spectrum of value, you get important information that's directly relevant to the sustainable value of the company." As an example, Gore cites a Generation report on the auto industry. Researchers analyzed traditional metrics, including sales and labor costs, but they also looked at the degree to which profits depended on high carbon output.
Two years before it became clear how badly General Motors and Ford were performing, the Generation team calculated that Toyota, a more carbon-conscious company with better labor relations, would gain a $1,500 advantage per vehicle as government-mandated fuel efficiency and carbon emission standards come into effect. GM's reliance on gas-guzzling SUVs made money in the short term. But the company's inability (or refusal) to position itself ahead of the coming carbon-regulation regime was a barometer of poor strategic thinking.
Generation likes to use this sort of nontraditional analysis. When considering an investment in an energy company with operations in the Rocky Mountain area, for instance, fund analysts looked to community blogs, where they found considerable local opposition to the company's strategy. "That business plan had a huge vulnerability that was outside the scope of its financial reports," Gore says. "I often say, 'It's really just common sense.' But common sense is not as common as it should be. Our whole mission is to make it mainstream."
Gore and his Generation partners base their investments on long-term research, looking ahead up to five years, and they have agreed not to take any profits themselves until three years into any investment. The firm began investing client money in April 2005 (it now manages around $200 million in assets), and Gore, while declining to give specific figures, says the returns thus far have been "really gratifying, I mean really exciting." Initial investments include companies involved in photovoltaics, wind turbines, wave energy, and solar power. The firm put money into BP, betting on its new power plant in Scotland that injects carbon emissions back into the ground. It's the kind of technology Generation sees as having a competitive advantage in a carbon-constrained economy.
Generation's overriding goal, of course, is to make money for its investors. But Gore and his partners also believe the firm can help innovative businesses attract even more funding. The idea is to draw capital away from the fossil-fueled economy and direct it toward new and profitable centers of the sustainable economy. "We're trying to get Wall Street to wake up," says Colin le Duc, who heads Generation's London-based research team. "I want to be able to sit there with the hardest-nosed, most skeptical investment fund manager in New York and say, 'We beat the market by 20 percent, and you can, too.'" The Gores and all the employees of Generation lead a "carbon-neutral" lifestyle, reducing their energy consumption when possible and purchasing so-called offsets available on newly emerging carbon markets. Gore says he and Tipper regularly calculate their home and business energy use - including the carbon cost of his prodigious global travel. Then he purchases offsets equal to the amount of carbon emissions they generate. Last year, for example, Gore and Tipper atoned for their estimated 1 million miles in global air travel by giving money to an Indian solar electric company and a Bulgarian hydroelectric project.
Carbon offsets are still an imperfect tool, favored only by a few early adopters. ( An Inconvenient Truth directs viewers to a personal carbon calculator posted at www.climatecrisis.net.) Gore acknowledges that the average US consumer isn't likely to join what is, for now, essentially a voluntary taxation system. "The real answer is going to come in the marketplace," he says. "When the capitalist market system starts working for us instead of at cross-purposes, then the economy will start pushing inexorably toward lower and lower levels of pollution and higher and higher levels of efficiency. The main thing that's needed is to get the information flows right, removing the distortions and paying attention to the incentives."
It is Friday afternoon at the TED conference in Monterey, California, the annual four-day, four-star schmooze-fest of the tech and design elite. Motivational speaker Tony Robbins is onstage, asking the TEDsters for reasons people commonly give when they fail. The answers are mostly predictable: bad management, not enough money, lack of time. Then Gore, who's sitting just a few feet from the stage, shouts, "The Supreme Court." Everyone roars with laughter. Robbins wheels on Gore. If he'd shown more passion, Robbins chides, "you'd have kicked his ass and won!" Everyone, Gore included, roars again - but the point is taken.
These days, Gore speaks with a verve and conviction that were often sorely absent during his political days. From time to time, he fires broadsides at the Bush administration - for its warrantless domestic wiretapping program, for the interrogation methods used against al Qaeda suspects rounded up in Iraq and Afghanistan - usually, he says, "when I get to the point where I can't stand not making a speech and unburdening myself." But most of Gore's public energies are directed toward his campaign against global warming, which he, like Tipper, describes in evangelical terms as "my mission." As vice president, and then as a candidate for president, Gore enjoyed a retinue of advisers, Secret Service agents, schedulers, and speechwriters. Save for one harried, full-time assistant, that's all gone now, a change that Gore seems to relish. On New Year's Eve 2005, he was home in Nashville with Tipper, hunched behind two 30-inch hi-def Apple displays, trying to finish his book on climate change. As he completed a page, Tipper would grab it from the printer and cram it into a three-ring binder. Finally, at 10:30 pm, the manuscript was finished, and Tipper raced down the driveway to hand it to a waiting courier. "I told Al, 'This is just the way it was when we started,'" she says, recounting the story for me without a shred of pathos. "'Just the two of us.'" These glimpses into what, for years, has been zealously guarded privacy are Gore's way of letting the world know that he has adapted quite comfortably to his life after politics. The inevitable queries about whether he plans to run again are batted aside with another one-liner: "I like to think of myself as a recovering politician. I'm on about step nine." During the question-and-answer session following his climate lecture at TED, Gore confesses, "I wasn't a very good politician."
"Oh, well," Gore deadpans in a Saturday Night Live imitation of himself. "There is that." Since his defeat in 2000, Gore has developed an impressive arsenal of self-deprecating ripostes to protect himself against misplaced pity. "The elephant in the room is always, How does he feel about the election?" film director Davis Guggenheim says. "You kind of suspect this guy is pissed off and dug in. And what he's saying right off is 'I've moved on, and I want you to move on with me. I need you to laugh about it, too.' And then he gets them to listen to what they need to hear." At TED, before offering his remedies for global warming, Gore acknowledges the elephant with a wicked stand-up routine - punctuated by faux crying jags - about the indignities of leaving public office. His shtick includes having to explain to Bill Clinton an erroneous Nigerian wire service report that he and Tipper had decided to open a chain of Shoney's eateries (prompting a letter of congratulations from the former president) and the "phantom limb" pain he feels when he looks in the rearview mirror and doesn't see his motorcade.
Reality is just as funny. Last year, while traveling on business, Gore stopped at a restaurant. A woman kept walking slowly past his booth to stare. Finally she stopped. "You know, if you dyed your hair black, you'd look just like Al Gore," she said.
"Why, thank you, ma'am," Gore, ever the straight man, responded.
"And your imitation of him is pretty good, too," she said.
This spring marks a coming-out of sorts for Gore, no longer a candidate for anything, but campaigning nonetheless to change American attitudes about global warming. Gore says he will channel earnings from his upcoming book and movie into a "mass persuasion" offensive. Together with An Inconvenient Truth producer Laurie David and a coalition of major environmental, business, labor, and religious groups, Gore wants to make climate crisis a household phrase. They plan a three-pronged Internet, television, and print advertising campaign to provoke wide-reaching changes in consumer and business behavior and to force shifts in government policy. He'll bring an army of surrogate speakers to Nashville, where he and Tipper will equip them with the slide show and train them to deliver the lecture.
During the opening sequence of the documentary, Gore confesses ruefully: "I've been trying to tell this story for a long time, and I feel as if I have failed to get the message across." For Al Gore, it's the race of his life.
Karen Breslau ( kbreslau@yahoo.com ) is San Francisco bureau chief for Newsweek.
Al Gore credit Martin Schoeller Feature: > The Resurrection of Al Gore Plus: > Citizen Gore Grading the Old Guard Topics magazine-14.05 Gregory Barber Ramin Skibba Matt Simon Matt Kamen Matt Simon Amit Katwala Angela Watercutter Jennifer M. Wood Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
132 | 2,020 | "The Legacy of Math Luminary John Conway, Lost to Covid-19 | WIRED" | "https://www.wired.com/story/the-legacy-of-math-luminary-john-conway-lost-to-covid-19" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Kevin Hartnett Science The Legacy of Math Luminary John Conway, Lost to Covid-19 Photograph: Dith Pran/New York Times/Redux Save this story Save Save this story Save In modern mathematics, many of the biggest advances are great elaborations of theory. Mathematicians move mountains, but their strength comes from tools, highly sophisticated abstractions that can act like a robotic glove, enhancing the wearer’s strength. John Conway was a throwback, a natural problem-solver whose unassisted feats often left his colleagues stunned.
Original story reprinted with permission from Quanta Magazine , an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
“Every top mathematician was in awe of his strength. People said he was the only mathematician who could do things with his own bare hands,” said Stephen Miller, a mathematician at Rutgers University. “Mathematically, he was the strongest there was.” On April 11, Conway died of Covid-19. The Liverpool, England, native was 82.
Conway’s contributions to mathematics were as varied as the stories people tell about him.
“Once he shook my hand and informed me that I was four handshakes away from Napoleon, the chain being: [me]—John Conway—Bertrand Russell—Lord John Russell–Napoleon,” said his Princeton University colleague David Gabai over email. Then there was the time Conway and one of his closest friends at Princeton, the mathematician Simon Kochen, decided to memorize the world capitals on a whim. “We decided to drop the mathematics for a while,” Kochen said, “and for a few weeks we’d go home and do, like, the western bulge of Africa or the Caribbean nations.” Conway had the tendency—perhaps unparalleled among his peers—of jumping into an area of mathematics and completely changing it.
“A lot of the objects he studied are thought of by other mathematicians the way that he thought of them,” Miller said. “It’s as if his personality has been superimposed on them.” Conway’s first big discovery was an act of self-preservation. In the mid-1960s he was a young mathematician looking to launch his career. On the recommendation of John McKay, he decided to try to prove something about the properties of a sprawling geometric object called the Leech lattice. It comes up in the study of the most efficient way to pack as many round objects in as little space as possible—an enterprise known as sphere packing.
To get a sense of what the Leech lattice is and why it’s important, first consider a simpler scenario. Imagine you wanted to fit as many circles as possible into a region of the standard Euclidean plane. You can do this by dividing the plane into one big hexagonal grid and circumscribing the largest possible circle inside each hexagon. The grid, called a hexagonal lattice, serves as an exact guide for the best way to pack circles in two-dimensional space.
In the 1960s, the mathematician John Leech came up with a different kind of lattice that he predicted would serve as a guide for the most efficient packing of 24-dimensional spheres in 24-dimensional space. (It later proved true.) This application to sphere packing made the Leech lattice interesting, but there were still many unknowns. Chief among them were the lattice’s symmetries, which can be collected into an object called a “group.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg In 1966, at McKay’s urging, Conway decided that he would discover the symmetry group of the Leech lattice, no matter how long it took.
“He sort of shut himself up in this room and said goodbye to his wife, and was [planning] to work all day every day for a year,” said Richard Borcherds, a mathematician at the University of California, Berkeley, and a former student of Conway’s.
But, as it turned out, the farewell was unnecessary. “He managed to calculate it in about 24 hours,” Borcherds said.
Rapid computation was one of Conway’s signature traits. It was a form of recreation for him. He devised an algorithm for quickly determining the day of the week for any date, past or future, and enjoyed inventing and playing games.
He’s perhaps best known for creating the “Game of Life,” a mesmerizing computer program in which collections of cells evolve into new configurations based on a few simple rules.
After discovering the symmetries of the Leech lattice—a collection now known as the Conway group—Conway became interested in the properties of other similar groups. One of these was the aptly named “monster” group, a collection of symmetries that appear in 196,883-dimensional space.
In a 1979 paper called “ Monstrous Moonshine ,” Conway and Simon Norton conjectured a deep and surprising relationship between properties of the monster group and properties of a distant object in number theory called the j-function. They predicted that the dimensions in which the monster group operates match, almost exactly, the coefficients of the j-function. A decade later, Borcherds proved Conway and Norton’s “moonshine” conjecture, helping him win a Fields Medal in 1998.
Without Conway’s facility for computation and taste for grappling with examples, he and Norton might not even have thought to conjecture the moonshine relationship.
“In doing these examples they discovered this numerology,” Miller said. “[Conway] did it from the ground up; he didn’t come in with some magic wand. When he understood something, he understood it as well as anyone else did, and usually did it in his own unique way.” Nine years before moonshine, Conway’s style of hands-on mathematics led him to a breakthrough in an entirely different area. In the field of topology, mathematicians study the properties of knots, which are like closed loops of string. Mathematicians are interested in classifying all types of knots. For example, if you attach the ends of an unknotted shoelace you get one type of knot. If you tie an overhand knot in the shoelace and then connect the ends, you get another.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But it’s not always that simple. If you take two closed loops and jumble each of them, the way a cat might play with a piece of string, you won’t necessarily be able to tell at a glance—even a long glance—whether or not they’re the same knot.
In the 19th century, a trio of British and American scientists—Thomas Kirkman, Charles Little and Peter Tait—labored to create a kind of periodic table of knots. Over the course of six years they classified the first 54 knots.
Conway, in a 1970 paper, came up with a more efficient way of doing the same job. His description—known as Conway notation—made it much easier to diagram the tangles and overlaps in a knot.
“What Little did in six years, it took him an afternoon,” said Marc Lackenby, a mathematician at the University of Oxford who studies knot theory.
And that wasn’t all. In the same paper, Conway made another major contribution to knot theory. Mathematicians studying knots have different types of tests they apply, which typically act as invariants, meaning that if the results come out as different for two knots, then the knots are different.
One of the most venerable tests in knot theory is the Alexander polynomial—a polynomial expression that’s based on the way a given knot crosses over itself. It’s a highly effective test, but it’s also slightly ambiguous. The same knot could yield multiple different (but very closely related) Alexander polynomials.
Conway managed to refine the Alexander polynomial, ironing out the ambiguity. The result was the invention of the Conway polynomial, which is now a basic tool learned by every knot theorist.
“He’s famous for coming in and doing things his own way. He definitely did that with knots, and it had a lasting influence,” Lackenby said.
Conway was an active researcher and a fixture in the Princeton math department common room well into his 70s. A major stroke two years ago, however, consigned him to a nursing home. His former colleagues, including Kochen, saw him there regularly until the Covid-19 pandemic made such visits impossible. Kochen continued to talk to him on the phone through the winter, including a final conversation about two weeks before Conway died.
“He didn’t like the fact that he couldn’t get any visitors, and he talked about that damn virus. And in fact, that damn virus did get him,” Kochen said.
Original story reprinted with permission from Quanta Magazine , an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
To run my best marathon at age 44, I had to outrun my past Amazon workers describe daily risks in a pandemic Stephen Wolfram invites you to solve physics Clever cryptography could protect privacy in contact-tracing apps Everything you need to work from home like a pro 👁 AI uncovers a potential Covid-19 treatment.
Plus: Get the latest AI news 🏃🏽♀️ Want the best tools to get healthy? Check out our Gear team’s picks for the best fitness trackers , running gear (including shoes and socks ), and best headphones Topics Quanta Magazine Amit Katwala Grace Browne Matt Simon Dell Cameron Max G. Levy Dhruv Mehrotra Max G. Levy Ramin Skibba Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights.
WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast.
Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia
" |
133 | 2,016 | "The Panama Papers and the Monster Stories of the Future | The New Yorker" | "https://www.newyorker.com/news/news-desk/the-panama-papers-and-the-monster-stories-of-the-future" | "Newsletter To revisit this article, select My Account, then View saved stories Close Alert Search The Latest News Books & Culture Fiction & Poetry Humor & Cartoons Magazine Puzzles & Games Video Podcasts Goings On Shop Open Navigation Menu Find anything you save across the site in your account Close Alert News Desk The Panama Papers and the Monster Stories of the Future By Nicholas Lemann The Panama Papers began in the old-fashioned way: a leaker contacted the Süddeutsche Zeitung, in Munich, eventually offering up millions of documents from the secret files of the Mossack Fonseca law firm.
Photograph by CHRISTOF STACHE / AFP / Getty Save this story Save this story Save this story Save this story The movie “Spotlight,” which for many journalists provided a jolt of pure gratification, follows the canonical story line for news-biz triumphs. A determined team at a major-league newspaper, led by a brave and supportive editor, is permitted to spend months relentlessly chasing down a major story. Sources help, of course, but they need to be persuaded and verified, and there is much more to the work than simply receiving material. Finally, after many setbacks that would have daunted ordinary mortals, the team fits all the pieces together. The presses roll. Justice is done. Nobody but a big news organization could have accomplished this.
A lifetime ago, the Watergate and Pentagon Papers stories, at least as told by journalists, went this way, and more recently the WikiLeaks and Edward Snowden stories, if you squinted, could look as if they did, too. There were renowned, heroic papers involved—the Guardian , the Times , the Washington Post —and their involvement seemed to be essential to the large effects of the revelations. What’s unusual about the monster story of the moment, the Panama Papers , at least in the United States, is that it lacks one lead actor, which usually has been an organization from the top rank of the journalism establishment. The coördinator of the coverage is the International Consortium of Investigative Journalists, a nineteen-year-old subsidiary of a nonprofit news organization in Washington called the Center for Public Integrity. The I.C.I.J. has only eleven full-time employees. The heart of their work, in this and other cases, was not “doing the story” by themselves but organizing an international network that took on the project, with all the parties agreeing to abide by a single deadline and to share credit. There were a hundred and seven media partners, some large (the BBC), some tiny (Inkyfada, in Tunisia). The _Time_s, the Washington Post , the Wall Street Journal , and the big American broadcast networks are notably absent from the list.
The Times ’ s public editor, Margaret Sullivan, wrote an uncharacteristically hazy article on why the Times did not participate in the consortium, and why it did not initially treat the Panama Papers as front-page news. She quoted Dean Baquet, the paper’s executive editor, saying that he could not recall the details of the Times ’ s past dealings with the I.C.I.J., but “I remember one talk when I was managing editor, and was worried about a story that involved many news organizations. But that wasn’t this cache.” Marina Walker Guevara, the deputy director of the I.C.I.J., has also been cryptic in her public comments about this—saying, on the organization’s site, that it chooses as partners only “Journalists who are team players and are willing to share their work with other colleagues around the world.” There is a more candid explanation in a paper that Bill Buzenberg, the former head of the Center for Public Integrity, wrote as a fellow at Harvard last year: “Other U.S. news organizations, most notably The New York Times and The Wall Street Journal , have often declined to collaborate, seeking exclusivity, or preferring to write their own stories about ICIJ’s results, after the fact, rather than join in the long, slow collaborative process leading up to publishing at an agreed-upon time and date.” Here’s what actually happened. The I.C.I.J. talked to the Times and at least several other major American news outlets about joining the teams associated with other major leaks, prior to the Panama Papers story. These talks did not go well, according to sources that were not authorized to speak on the matter, because the big-dog news organizations did not want to abide by the I.C.I.J.’s condition that they operate as co-equal members of a large team. So when the Panama Papers came along, the I.C.I.J. didn’t even bother pitching the Times and the other papers on joining the consortium. (The Times did publish a story based on I.C.I.J.-generated documents in 2013, but not as part of an I.C.I.J.-organized consortium.) Baquet offered me an explanation for the Times ’ s decisions: “What people forget is that everybody has to agree on what’s a story. The logistics are really tricky. Let’s say some document has not enough proof for me, but enough for another news organization—or vice versa. How do you manage that? It’s not as easy as you think. It’s not just ‘Go!’ It’s really difficult.” Baquet pointed out that the Times worked successfully with a limited number of partners—the Guardian , Der Spiegel , and Julian Assange himself—on the WikiLeaks story, and he said that if the I.C.I.J. had asked him to have the Times take part in the Panama Papers story, he would have agreed to join the consortium. “If they’d come to me and offered the level of detail that they had, I’d have swallowed and participated. If the story’s big enough, absolutely I would have participated.” Jill Abramson, Baquet’s predecessor and the executive editor during some of the past negotiations between the Times and the I.C.I.J., added, in an email, “Journalism is becoming more collaborative all the time, even within a single news organization. It takes a team to pull together any big project.” The absence of the Times and the others from the Panama Papers story might have been a one-time happenstance, but the lack of leading establishment players also might be a sign that the way journalism functions is changing. That’s what Bill Buzenberg appears to think. 
Toward the end of his Harvard paper, he added this little dig: “The attitude that ‘we know best’ and ‘we do it all ourselves’ is an increasingly antiquated notion in the digital age when knowledgeable members of the public and colleagues at other news organizations could be brought into an effective journalistic process in new ways to become part of a more robust collaborative investigative effort.” If Buzenberg is right, then the Panama Papers is the latest important piece of evidence in support of the notion that, in every realm, the way work gets done is shifting from big institutions to loose networks. It may be, though, that the I.C.I.J. model is merely a phase in a progression toward an even more radically distributed way of breaking monster stories, one that would not involve journalists at all. Mark Felt, the F.B.I. official who was Watergate’s Deep Throat, merely gave cryptic spoken clues to Bob Woodward—he absolutely depended on journalism to get the story out. Daniel Ellsberg, the leaker of the Pentagon Papers, had the ability to photocopy his material but not to publish it, so he needed journalism, too. Julian Assange and Edward Snowden could (and in Assange’s case, did) self-publish their purloined data troves electronically. They decided to seek partners in the mainstream media in order to get more attention, and to take advantage of American legal protections that made it difficult for the government to prevent publication.
The Panama Papers began in the old-fashioned way: a leaker contacted a traditional newspaper, the Süddeutsche Zeitung, in Munich, eventually offering up more than eleven million documents from the secret files of the Mossack Fonseca law firm in Panama. The paper didn’t have the ability to go through that much raw data on its own, so it approached the I.C.I.J., which activated the consortium. What if, in the future, a leaker simply self-publishes, and asks the crowd to make sense of the material on an independent Web site or a social-media platform? The I.C.I.J. has three computer programmers turned data journalists on its small payroll who give it a better ability to make sense of a great mass of material quickly than most news organizations have. What if a future data file is so enormous that significant numbers of high-end computer scientists who are expert in the more recondite realms of machine learning are better suited to find the news in it than anybody a journalism organization could afford to employ? Whether it involves big organizations or online networks, the sort of journalism narrative that turns on reporters and editors acting as intermediaries between a leaker and the public may turn out to have been just a phase in the history of the profession.
" |
134 | 2,023 | "A Small-Town Paper Lands a Very Big Story | The New Yorker" | "https://www.newyorker.com/magazine/2023/07/31/a-small-town-paper-lands-a-very-big-story" | "U.S. Journal: A Small-Town Paper Lands a Very Big Story. By Paige Williams. As the drama unfolded, a staffer at the Gazette said, “I cursed our lives by deciding to move here.” Photographs by Joseph Rushmore for The New Yorker
Bruce Willingham, fifty-two years a newspaperman, owns and publishes the McCurtain Gazette, in McCurtain County, Oklahoma, a rolling sweep of timber and lakes that forms the southeastern corner of the state. McCurtain County is geographically larger than Rhode Island and less populous than the average Taylor Swift concert. Thirty-one thousand people live there; forty-four hundred buy the Gazette, which has been in print since 1905, before statehood. At that time, the paper was known as the Idabel Signal, referring to the county seat. An early masthead proclaimed “INDIAN TERRITORY, CHOCTAW NATION.”
Willingham bought the newspaper in 1988, with his wife, Gwen, who gave up a nursing career to become the Gazette’s accountant. They operate out of a storefront office in downtown Idabel, between a package-shipping business and a pawnshop. The staff parks out back, within sight of an old Frisco railway station, and enters through the “morgue,” where the bound archives are kept. Until recently, no one had reason to lock the door during the day.
Three days a week (five, before the pandemic), readers can find the latest on rodeo queens, school cafeteria menus, hardwood-mill closings, heat advisories. Some headlines: “Large Cat Sighted in Idabel,” “Two of State’s Three Master Bladesmiths Live Here,” “Local Singing Group Enjoys Tuesdays.” Anyone who’s been cited for speeding, charged with a misdemeanor, applied for a marriage license, or filed for divorce will see his or her name listed in the “District Court Report.” In Willingham’s clutterbucket of an office, a hulking microfiche machine sits alongside his desktop computer amid lunar levels of dust; he uses the machine to unearth and reprint front pages from long ago. In 2017, he transported readers to 1934 via a banner headline: “NEGRO SLAYER OF WHITE MAN KILLED.” The area has long been stuck with the nickname Little Dixie.
Gazette articles can be shorter than recipes, and what they may lack in detail, context, and occasionally accuracy, they make up for by existing at all. The paper does more than probe the past or keep tabs on the local felines. “We’ve investigated county officials a lot,” Willingham, who is sixty-eight, said the other day. The Gazette exposed a county treasurer who allowed elected officials to avoid penalties for paying their property taxes late, and a utilities company that gouged poor customers while lavishing its executives with gifts. “To most people, it’s Mickey Mouse stuff,” Willingham told me. “But the problem is, if you let them get away with it, it gets worse and worse and worse.”
The Willinghams’ oldest son, Chris, and his wife, Angie, work at the Gazette, too. They moved to Idabel from Oklahoma City in the spring of 2005, not long after graduating from college. Angie became an editor, and Chris covered what is known in the daily-news business as cops and courts. Absurdity often made the front page—a five-m.p.h. police “chase” through town, a wayward snake. Three times in one year, the paper wrote about assaults in which the weapon was chicken and dumplings. McCurtain County, which once led the state in homicides, also produces more sinister blotter items: a man cashed his dead mother’s Social Security checks for more than a year; a man killed a woman with a hunting bow and two arrows; a man raped a woman in front of her baby.
In a small town, a dogged reporter is inevitably an unpopular one. It isn’t easy to write about an old friend’s felony drug charge, knowing that you’re going to see him at church. When Chris was a teen-ager, his father twice put him in the paper, for the misdemeanors of stealing beer, with buddies, at a grocery store where one of them worked, and parking illegally—probably with those same buddies, definitely with beer—on a back-road bridge, over a good fishing hole.
Chris has a wired earnestness and a voice that carries. Listening to a crime victim’s story, he might boom, “ Gollll-ly! ” Among law-enforcement sources, “Chris was respected because he always asked questions about how the system works, about proper procedure,” an officer said. Certain cops admired his willingness to pursue uncomfortable truths even if those truths involved one of their own. “If I was to do something wrong—on purpose, on accident—Chris Willingham one hundred per cent would write my butt in the paper, on the front page, in bold letters,” another officer, who has known him for more than a decade, told me.
In the summer of 2021, Chris heard that there were morale problems within the McCurtain County Sheriff’s Office. The sheriff, Kevin Clardy, who has woolly eyebrows and a mustache, and often wears a cowboy hat, had just started his second term. The first one had gone smoothly, but now, according to some colleagues, Clardy appeared to be playing favorites.
The current discord stemmed from two recent promotions. Clardy had brought in Larry Hendrix, a former deputy from another county, and, despite what some considered to be weak investigative skills, elevated him to undersheriff—second-in-command. Clardy had also hired Alicia Manning, who had taken up law enforcement only recently, in her forties. Rookies typically start out on patrol, but Clardy made Manning an investigator. Then he named her captain, a newly created position, from which she oversaw the department’s two dozen or so deputies and managed cases involving violence against women and children. Co-workers were dismayed to see someone with so little experience rise that quickly to the third most powerful rank. “Never patrolled one night, never patrolled one day, in any law-enforcement aspect, anywhere in her life, and you’re gonna bring her in and stick her in high crimes?” one officer who worked with her told me.
Chris was sitting on a tip that Clardy favored Manning because the two were having an affair. Then, around Thanksgiving, 2021, employees at the county jail, whose board is chaired by the sheriff, started getting fired, and quitting. The first to go was the jail’s secretary, who had worked there for twenty-six years. The jail’s administrator resigned on the spot rather than carry out the termination; the secretary’s husband, the jail’s longtime handyman, quit, too. When Chris interviewed Clardy about the unusual spate of departures, the sheriff pointed out that employment in Oklahoma is at will. “It is what it is,” he said. In response to a question about nepotism, involving the temporary promotion of his stepdaughter’s husband, Clardy revealed that he had been divorced for a few months and separated for more than a year. Chris asked, “Are you and Alicia having sex?” Clardy repeatedly said no, insisting, “We’re good friends. Me and Larry’s good friends, but I’m not having sex with Larry, either.” Meanwhile, someone had sent Chris photographs of the department’s evidence room, which resembled a hoarder’s nest. The mess invited speculation about tainted case material. In a front-page story, branded “first of a series,” the Gazette printed the images, along with the news that Hendrix and Manning were warning deputies to stop all the “backdoor talk.” The sheriff told staffers that anyone who spoke to the Gazette would be fired.
Manning has thick, ash-streaked hair, a direct manner, and what seems to be an unwavering loyalty to Clardy. She offered to help him flush out the leakers, and told another colleague that she wanted to obtain search warrants for cell phones belonging to deputies. When Chris heard that Manning wanted to confiscate his phone, he called the Oklahoma Press Association—and a lawyer. (Oklahoma has a shield law, passed in the seventies, which is designed to protect journalists’ sources.) The lawyer advised Chris to leave his phone behind whenever he went to the sheriff’s department. Angie was prepared to remotely wipe the device if Chris ever lost possession of it.
John Jones, a narcotics detective in his late twenties, cautioned Manning against abusing her authority. Jones was the sheriff’s most prolific investigator, regarded as a forthright and talented young officer—a “velociraptor,” according to one peer. He had documented the presence of the Sinaloa cartel in McCurtain County, describing meth smuggled from Mexico in shipments of pencils, and cash laundered through local casinos. Jones had filed hundreds of cases between 2019 and most of 2021, compared with a couple of dozen by Manning and Hendrix combined. The Gazette reported that, on December 1st—days after confronting Manning—Jones was bumped down to patrol. The next day, he quit.
In the summer of 2021, the Gazette got a tip about morale problems at the county sheriff’s department. Then employees started getting fired, and quitting.
“If I was to do something wrong,” one law-enforcement officer said, “Chris Willingham one hundred per cent would write my butt in the paper.”
Within the week, Hendrix fired the department’s second most productive investigator, Devin Black. An experienced detective in his late thirties, Black had just recovered nearly a million dollars’ worth of stolen tractors and construction equipment, a big deal in a county whose economy depends on agriculture and tourism. (At Broken Bow Lake, north of Idabel, newcomers are building hundreds of luxury cabins in Hochatown, a resort area known as the Hamptons of Dallas-Fort Worth.) Black said nothing publicly after his departure, but Jones published an open letter in the Gazette, accusing Hendrix of neglecting the case of a woman who said that she was raped at gunpoint during a home invasion. The woman told Jones that she had been restrained with duct tape during the attack, and that the tape might still be at her house. Hendrix, Jones wrote, “never followed up or even reached out to the woman again.” Curtis Fields, a jail employee who had recently been fired, got a letter of his own published in the Gazette.
He wrote that the sheriff’s “maladministration” was “flat-out embarrassing to our entire county,” and, worse, put “many cases at risk.” Around this time, Hendrix was moved over to run the jail, and Clardy hired Alicia Manning’s older brother, Mike, to be the new undersheriff. Mike, who had long worked part time as a local law-enforcement officer, owned IN-Sight Technologies, a contractor that provided CCTV, security, and I.T. services to the county, including the sheriff’s department. The Willinghams observed that his new position created a conflict of interest. In late December, the day after Mike’s appointment, Chris and Bruce went to ask him about it. Mike said that he had resigned as IN-Sight’s C.E.O. that very day and, after some prodding, acknowledged that he had transferred ownership of the company—to his wife. He assured the Willinghams that IN-Sight’s business with McCurtain County was “minuscule.” According to records that I requested from the county clerk, McCurtain County has issued at least two hundred and thirty-nine thousand dollars in purchase orders to the company since 2016. The county commissioners have authorized at least eighty thousand dollars in payments to IN-Sight since Mike became undersheriff.
Mike urged the Willinghams to focus on more important issues. When he said, “I’m not here to be a whipping post, because there’s a lot of crime going on right now,” Chris replied, “Oh, yeah, I agree.” The undersheriff claimed to have no problem with journalists, saying, “I’m a constitutional guy.” State “sunshine” laws require government officials to do the people’s business in public: most records must be accessible to anyone who wishes to see them, and certain meetings must be open to anyone who would like to attend. Bruce Willingham once wrote, “We are aggressive about protecting the public’s access to records and meetings, because we have found that if we don’t insist on both, often no one else will.” The Center for Public Integrity grades each state on the quality of its open-government statutes and practices. At last check, Oklahoma, along with ten other states, got an F.
In January, 2022, Chris noticed a discrepancy between the number of crimes listed in the sheriff’s logbook and the correlating reports made available to him. Whereas he once saw thirty to forty reports per week, he now saw fewer than twenty. “The ones that I get are like ‘loose cattle on somebody’s land,’ all very minor stuff,” he told me. He often didn’t find out about serious crime until it was being prosecuted. In his next article, he wrote that fifty-three reports were missing, including information about “a shooting, a rape, an elementary school teacher being unknowingly given marijuana cookies by a student and a deputy allegedly shooting out the tires” of a car. The headline was “Sheriff Regularly Breaking Law Now.”
Two weeks later, the sheriff’s department landed back on page 1 after four felons climbed through the roof of the jail, descended a radio tower, and fled—the first escape in twenty-three years. Chris reported that prisoners had been sneaking out of the jail throughout the winter to pick up “drugs, cell phones and beer” at a nearby convenience store.
Three of the escapees were still at large when, late one Saturday night in February, Alyssa Walker-Donaldson, a former Miss McCurtain County, vanished after leaving a bar in Hochatown. When the sheriff’s department did not appear to be exacting in its search, volunteers mounted their own. It was a civilian in a borrowed Cessna who spotted Walker-Donaldson’s white S.U.V. at the bottom of Broken Bow Lake. An autopsy showed that she had suffered acute intoxication by alcohol and drowned in what was described as an accident. The findings failed to fully explain how Walker-Donaldson, who was twenty-four, wound up in the water, miles from where she was supposed to be, near a boat ramp at the end of a winding road. “Even the U.P.S. man can’t get down there,” Walker-Donaldson’s mother, Carla Giddens, told me. Giddens wondered why all five buttons on her daughter’s high-rise jeans were undone, and why her shirt was pushed above her bra. She told a local TV station, “Nothing was handled right when it came to her.” Giddens suspected that the sheriff’s disappointing search could be attributed to the fact that her daughter was Black and Choctaw. (She has since called for a new investigation.) Not long after that, the sheriff’s department responded to a disturbance at a roadside deli. A deputy, Matt Kasbaum, arrived to find a man hogtied on the pavement; witnesses, who said that the man had broken a door and was trying to enter people’s vehicles, had trussed him with cord. “Well, this is interesting,” Kasbaum remarked. He handcuffed the man, Bobby Barrick, who was forty-five, then cut loose the cord and placed him in the back seat of a patrol unit. An E.M.S. crew arrived to examine Barrick. “He’s doped up hard ,” Kasbaum warned. When he opened the door, Barrick tried to kick his way out, screaming “Help me!” and “They’re gonna kill me!” As officers subdued him, Barrick lost consciousness. Several days later, he died at a hospital in Texas.
The public initially knew little of this because the sheriff refused to release information, on the ground that Barrick belonged to the Choctaw Nation and therefore the arrest fell under the jurisdiction of tribal police. The Willinghams turned to the Reporters Committee for Freedom of the Press, a nonprofit, headquartered in Washington, D.C., that provides pro-bono legal services to journalists. (The Reporters Committee has also assisted The New Yorker.) The organization had recently assigned a staff attorney to Oklahoma, an indication of how difficult it is to pry information from public officials there. Its attorneys helped the Gazette sue for access to case documents; the paper then reported that Kasbaum had tased Barrick three times on his bare hip bone. Barrick’s widow filed a lawsuit, alleging that the taser was not registered with the sheriff’s department and that deputies had not been trained to use it. The suit also alleged that Kasbaum and other officers had turned off their lapel cameras during the encounter and put “significant pressure on Barrick’s back while he was in a face-down prone position and handcuffed.” Kasbaum, who denied the allegations, left the force. The Gazette reported that the F.B.I. and the Oklahoma State Bureau of Investigation were looking into the death.
Chris and Angie got married soon after joining the Gazette.
By the time Chris began publishing his series on the sheriff’s department, they were in their late thirties, with small children, two dogs, and a house on a golf course. They once had a bluegrass band, Succotash, in which Angie played Dobro and Chris played everything, mainly fiddle. He taught music lessons and laid down tracks for clients at his in-home studio. Angie founded McCurtain Mosaics, working with cut glass. The couple, who never intended to become journalists, suppressed the occasional urge to leave the Gazette , knowing that they would be hard to replace. Bruce lamented, “Everybody wants to work in the big city.” Five days a week, in the newsroom, Chris and Angie sit in high-walled cubicles, just outside Bruce’s office. The Gazette’s other full-time reporters include Bob West, who is eighty-one and has worked at the paper for decades. An ardent chronicler of museum events, local schools, and the weather, West is also known, affectionately, as the staffer most likely to leave his car running, with the windows down, in the rain, or to arrive at work with his toothbrush in his shirt pocket. He once leaned on his keyboard and accidentally deleted the newspaper’s digital Rolodex. One afternoon in May, he ambled over to Angie’s desk, where the Willinghams and I were talking, and announced, “Hail, thunderstorms, damaging winds!” A storm was coming.
Bruce and Gwen Willingham own commercial real estate, and they rent several cabins to vacationers in Hochatown. Chris said, “If we didn’t have tourism to fall back on, we couldn’t run the newspaper. The newspaper loses money.” An annual subscription costs seventy-one bucks; the rack price is fifty cents on weekdays, seventy-five on the weekend. During the pandemic, the Willinghams reduced both the publishing schedule and the size of the broadsheet, to avoid layoffs. The paper’s receptionist, who is in her sixties, has worked there since she was a teen-ager; a former pressman, who also started in his teens, left in his nineties, when his doctor demanded that he retire. In twenty-five paces, a staffer can traverse the distance between the newsroom and the printing press—the Gazette is one of the few American newspapers that still publish on-site, or at all. Since 2005, more than one in four papers across the country have closed; according to the Medill School of Journalism, at Northwestern University, two-thirds of U.S. counties don’t have a daily paper. When Chris leads tours for elementary-school students, he schedules them for afternoons when there’s a print run, though he isn’t one to preach about journalism’s vital role in a democracy. He’s more likely to jiggle one of the thin metal printing plates, to demonstrate how stagehands mimic thunder.
The Gazette is one of the few American newspapers that still print on-site, or at all. Since 2005, more than one in four have closed.
Bruce Willingham, who has been in the small-town-news business for more than five decades, struggles to find good reporters. “Everyone wants to work in the big city,” he said.
As the Walker-Donaldson case unfolded, Chris got a tip that the sheriff used meth and had been “tweaking” during the search for her. Bruce asked the county commissioners to require Clardy to submit to a drug test. Urinalysis wasn’t good enough—the Gazette wanted a hair-follicle analysis, which has a much wider detection window. The sheriff peed in a cup. Promptly, prominently, the Gazette reported the results, which were negative, but noted that Clardy had declined the more comprehensive test.
“This has to stop!” the sheriff posted on the department’s Facebook page. Complaining about “the repeated attacks on law enforcement,” he wrote, “We have a job to do and that is to protect people. We can’t cater to the newspaper or social media every day of the week.” Clardy blamed the Gazette’s reporting on “former employees who were terminated or resigned.” Locals who were following the coverage and the reactions couldn’t decide what to make of the devolving relationship between the Gazette and county leadership. Was their tiny newspaper needlessly antagonizing the sheriff, or was it insisting on accountability in the face of misconduct? Craig Young, the mayor of Idabel, told me that he generally found the paper’s reporting to be accurate; he also said that the county seemed to be doing a capable job of running itself. He just hoped that nothing would disrupt Idabel’s plans to host an upcoming event that promises to draw thousands of tourists. On April 8, 2024, a solar eclipse will arc across the United States, from Dallas, Texas, to Caribou, Maine. McCurtain County lies in one of the “totality zones.” According to NASA , between one-forty-five and one-forty-nine that afternoon, Idabel will experience complete darkness.
In October, 2022, Chris got another explosive tip—about himself. A local law-enforcement officer sent him audio excerpts of a telephone conversation with Captain Manning. The officer did not trust Manning, and had recorded their call. (Oklahoma is a one-party-consent state.) They discussed office politics and sexual harassment. Manning recalled that, after she was hired, a detective took bets on which co-worker would “hit it,” or sleep with her, first. Another colleague gossiped that she “gave a really good blow job.” The conversation turned to Clardy’s drug test. As retribution, Manning said that she wanted to question Chris in one of her sex-crime investigations—at a county commissioners’ meeting, “in front of everybody.” She went on, “We will see if they want to write about that in the newspaper. That’s just the way I roll. ‘O.K., you don’t wanna talk about it? Fine. But it’s “public record.” Y’all made mine and Kevin’s business public record.’ ” At the time, Manning was investigating several suspected pedophiles, including a former high-school math teacher who was accused of demanding nude photographs in exchange for favorable grades. (The teacher is now serving thirteen years in prison.) Manning told a TV news station that “possibly other people in the community” who were in a “position of power” were involved. On the recorded call, she mentioned pedophilia defendants by name and referred to Chris as “one of them.” Without citing evidence, she accused him of trading marijuana for videos of children.
Chris, stunned, suspected that Manning was just looking for an excuse to confiscate his phone. But when he started to lose music students, and his kids’ friends stopped coming over, he feared that rumors were spreading in the community. A source warned him that Manning’s accusations could lead to his children being forensically interviewed, which happens in child-abuse investigations. He developed such severe anxiety and depression that he rarely went out; he gave his firearms to a relative in case he felt tempted to harm himself. Angie was experiencing panic attacks and insomnia. “We were not managing,” she said.
That fall, as Chris mulled his options, a powerful tornado struck Idabel. Bruce and Gwen lost their home. They stored their salvaged possessions at the Gazette and temporarily moved in with Chris and Angie. In December, the Gazette announced that Chris planned to sue Manning. On March 6th, he did, in federal court, alleging “slander and intentional infliction of emotional distress” in retaliation for his reporting. Clardy was also named as a defendant, for allowing and encouraging the retaliation to take place. (Neither he nor Manning would speak with me.) In May, both Clardy and Manning answered the civil complaint in court. Clardy denied the allegations against him. Manning cited protection under the legal doctrine of qualified immunity, which is often used to indemnify law-enforcement officers from civil action and prosecution. She denied the allegations and asserted that, if Chris Willingham suffered severe emotional distress, it fell within the limits of what “a reasonable person could be expected to endure.”
On the day that Chris filed his lawsuit, the McCurtain County Board of Commissioners held its regular Monday meeting, at 9 A.M., in a red brick building behind the jail. Commissioners—there are three in each of Oklahoma’s seventy-seven counties—oversee budgets and allocate funding. Their meeting agendas must be public, so that citizens can scrutinize government operations. Bruce, who has covered McCurtain’s commissioners for more than forty years, suspected the board of discussing business not listed on the agenda—a potential misdemeanor—and decided to try to catch them doing it.
Two of the three commissioners—Robert Beck and Mark Jennings, the chairman—were present, along with the board’s executive assistant, Heather Carter. As they neared the end of the listed agenda, Bruce slipped a recording device disguised as a pen into a cup holder at the center of the conference table. “Right in front of ’em,” he bragged. He left, circling the block for the next several hours as he waited for the commissioners to clear out. When they did, he went back inside, pretended to review some old paperwork, and retrieved the recording device.
That night, after Gwen went to bed, Bruce listened to the audio, which went on for three hours and thirty-seven minutes. He heard other county officials enter the room, one by one—“Like, ‘Now is your time to see the king.’ ” In came Sheriff Clardy and Larry Hendrix. Jennings, whose family is in the timber business, brought up the 2024 race for sheriff. He predicted numerous candidates, saying, “They don’t have a goddam clue what they’re getting into, not in this day and age.” It used to be, he said, that a sheriff could “take a damn Black guy and whup their ass and throw ’em in the cell.” “Yeah, well, it’s not like that no more,” Clardy said.
“I know,” Jennings said. “Take ’em down there on Mud Creek and hang ’em up with a damn rope. But you can’t do that anymore. They got more rights than we got.” After a while, Manning joined the meeting. She arrived to a boisterous greeting from the men in the room. When she characterized a colleague’s recent comment about her legs as sexual harassment, Beck replied, “I thought sexual harassment was only when they held you down and pulled you by the hair.” They joked about Manning mowing the courthouse lawn in a bikini.
Manning continually steered the conversation to the Gazette.
Jennings suggested procuring a “worn-out tank,” plowing it into the newspaper’s office, and calling it an accident. The sheriff told him, “You’ll have to beat my son to it.” (Clardy’s son is a deputy sheriff.) They laughed.
Manning talked about the possibility of bumping into Chris Willingham in town: “I’m not worried about what he’s gonna do to me, I’m worried about what I might do to him.” A couple of minutes later, Jennings said, “I know where two big deep holes are here, if you ever need them.” “I’ve got an excavator,” the sheriff said.
“Well, these are already pre-dug,” Jennings said. He went on, “I’ve known two or three hit men. They’re very quiet guys. And would cut no fucking mercy.” Bruce had been threatened before, but this felt different. According to the U.S. Press Freedom Tracker, forty-one journalists in the country were physically assaulted last year. Since 2001, at least thirteen have been killed. That includes Jeff German, a reporter at the Las Vegas Review-Journal , who, last fall, was stabbed outside his home in Clark County. The county’s former administrator, Robert Telles, has been charged with his murder. Telles had been voted out of office after German reported that he contributed to a hostile workplace and had an inappropriate relationship with an employee. (Telles denied the reporting and has pleaded not guilty.) When Bruce urged Chris to buy more life insurance, Chris demanded to hear the secret recording. The playback physically sickened him. Bruce took the tape to the Idabel Police Department. Mark Matloff, the district attorney, sent it to state officials in Oklahoma City, who began an investigation.
Chris started wearing an AirTag tracker in his sock when he played late-night gigs. He carried a handgun in his car, then stopped—he and Angie worried that an officer could shoot him and claim self-defense. He talked incessantly about “disappearing” to another state. At one point, he told his dad, “I cursed our lives by deciding to move here.” It was tempting to think that everybody was watching too much “Ozark.” But one veteran law-enforcement official took the meeting remarks seriously enough to park outside Chris and Angie’s house at night, to keep watch. “There’s an undertone of violence in the whole conversation,” this official told me. “We’re hiring a hit man, we’re hanging people, we’re driving vehicles into the McCurtain Gazette.
These are the people that are running your sheriff’s office.”
On Saturday, April 15th, the newspaper published a front-page article, headlined “County officials discuss killing, burying Gazette reporters.” The revelation that McCurtain County’s leadership had been caught talking wistfully about lynching and about the idea of murdering journalists became global news. “Both the FBI and the Oklahoma Attorney General’s Office now have the full audio,” the Gazette reported. (The McCurtain County Board of Commissioners declined to speak with me. A lawyer for the sheriff’s office wrote, in response to a list of questions, that “numerous of your alleged facts are inaccurate, embellished or outright untrue.”)
On the eve of the story’s publication, Chris and his family had taken refuge in Hot Springs, Arkansas. They were still there when, that Sunday, Kevin Stitt, the governor of Oklahoma, publicly demanded the resignations of Clardy, Manning, Hendrix, and Jennings. The next day, protesters rallied at the McCurtain County commissioners’ meeting. Jennings, the board’s chairman, resigned two days later. No one else did. The sheriff’s department responded to the Gazette’s reporting by calling Bruce’s actions illegal and the audio “altered.” (Chris told me that he reduced the background noise in the audio file before Bruce took it to the police.) People wanted to hear the recording, not just read about it, but the Gazette had no Web site. No one had posted on the newspaper’s Facebook page since 2019, when Kiara Wimbley won the Little Miss Owa Chito pageant. The Willinghams published an oversized QR code on the front page of the April 20th issue, linking to a Dropbox folder that contained the audio and Angie’s best attempt at a transcript. They eventually put Chris’s articles online.
In a rare move, the seventeen-member board of the Oklahoma Sheriffs’ Association voted unanimously to suspend the memberships of Clardy, Manning, and Hendrix. The censure blocked them from conferences and symbolically ostracized them from Oklahoma’s seventy-six other sheriffs. “When one goes bad, it has a devastating effect on everybody,” Ray McNair, the executive director, told me. Craig Young, Idabel’s mayor, said, “It kind of hurt everyone to realize we’ve had these kind of leaders in place.” Young was among those who hoped that Gentner Drummond, the attorney general, would depose the sheriff “so we can start to recover.” But, on June 30th, Drummond ended his investigation by informing Governor Stitt that although the McCurtain County officials’ conversation was “inflammatory” and “offensive,” it wasn’t criminal. There would be no charges. If Clardy were to be removed from office, voters would have to do it.
Decades ago, Bruce launched “Call the Editor,” a regular feature on the Gazette’s opinion page. Readers vent anonymously to the newspaper’s answering machine, and Bruce publishes some of the transcribed messages. When the world ran out of answering machines, he grudgingly upgraded to digital, which requires plugging the fax cable into his computer every afternoon at five and switching it back the next morning. A caller might refer to Nancy Pelosi and Chuck Schumer as “buffoons,” or ask, Why is the Fire Department charging me a fifty-cent fee? There have been many recent messages about the sheriff and the commissioners, including direct addresses to Clardy: “The people aren’t supposed to be scared . . . of you or others that wear a badge.”
Bruce and Gwen worried that the ongoing stress would drive Chris and Angie away from the Gazette—and from McCurtain County. Sure enough, they’re moving to Tulsa. Angie told me, “We’re forty years old. We’ve been doing this half our lives. At some point, we need to think of our own happiness, and our family’s welfare.” Bruce protested, but he couldn’t much blame them. ♦
The newspaper managed to secretly record a county meeting and caught officials talking about the idea of killing Gazette reporters.
" |
135 | 2,021 | "Among the Insurrectionists at the Capitol | The New Yorker" | "https://www.newyorker.com/magazine/2021/01/25/among-the-insurrectionists" | "A Reporter at Large: Among the Insurrectionists. By Luke Mogelson. The attack on the Capitol was a predictable culmination of a months-long ferment. Throughout the pandemic, right-wing protesters had been gathering at statehouses, demanding entry and shouting things like “Treason!” and “Let us in!” Photograph by Balazs Gardi for The New Yorker
By the end of President Donald Trump’s crusade against American democracy—after a relentless deployment of propaganda, demagoguery, intimidation, and fearmongering aimed at persuading as many Americans as possible to repudiate their country’s foundational principles—a single word sufficed to nudge his most fanatical supporters into open insurrection. Thousands of them had assembled on the Mall, in Washington, D.C., on the morning of January 6th, to hear Trump address them from a stage outside the White House. From where I stood, at the foot of the Washington Monument, you had to strain to see his image on a jumbotron that had been set up on Constitution Avenue. His voice, however, projected clearly through powerful speakers as he rehashed the debunked allegations of massive fraud which he’d been propagating for months. Then he summarized the supposed crimes, simply, as “bullshit.” “Bullshit! Bullshit!” the crowd chanted. It was a peculiar mixture of emotion that had become familiar at pro-Trump rallies since he lost the election: half mutinous rage, half gleeful excitement at being licensed to act on it. The profanity signalled a final jettisoning of whatever residual deference to political norms had survived the past four years. In front of me, a middle-aged man wearing a Trump flag as a cape told a young man standing beside him, “There’s gonna be a war.” His tone was resigned, as if he were at last embracing a truth that he had long resisted. “I’m ready to fight,” he said. The young man nodded. He had a thin mustache and hugged a life-size mannequin with duct tape over its eyes, “traitor” scrawled on its chest, and a noose around its neck.
“We want to be so nice ,” Trump said. “We want to be so respectful of everybody, including bad people. We’re going to have to fight much harder. And Mike Pence is going to have to come through for us.” About a mile and a half away, at the east end of the Mall, Vice-President Pence and both houses of Congress had convened to certify the Electoral College votes that had made Joe Biden and Kamala Harris the next President and Vice-President of the United States. In December, a hundred and forty Republican representatives—two-thirds of the caucus—had said that they would formally object to the certification of several swing states. Fourteen Republican senators, led by Josh Hawley, of Missouri, and Ted Cruz , of Texas, had joined the effort. The lawmakers lacked the authority to overturn the election, but Trump and his allies had concocted a fantastical alternative: Pence, as the presiding officer of the Senate, could single-handedly nullify votes from states that Biden had won. Pence, though, had advised Congress that the Constitution constrained him from taking such action.
“After this, we’re going to walk down, and I’ll be there with you,” Trump told the crowd. The people around me exchanged looks of astonishment and delight. “We’re going to walk down to the Capitol, and we’re going to cheer on our brave senators and congressmen and women. We’re probably not going to be cheering so much for some of them—because you’ll never take back our country with weakness. You have to show strength.”
“No weakness!” a woman cried.
Before Trump had even finished his speech, approximately eight thousand people started moving up the Mall. “We’re storming the Capitol!” some yelled.
There was an eerie sense of inexorability, the throngs of Trump supporters advancing up the long lawn as if pulled by a current. Everyone seemed to understand what was about to happen. The past nine weeks had been steadily building toward this moment. On November 7th, mere hours after Biden’s win was projected, I attended a protest at the Pennsylvania state capitol, in Harrisburg. Hundreds of Trump supporters, including heavily armed militia members, vowed to revolt. When I asked a man with an assault rifle—a “combat-skills instructor” for a militia called the Pennsylvania Three Percent—how likely he considered the prospect of civil conflict, he told me, “It’s coming.” Since then, Trump and his allies had done everything they could to spread and intensify this bitter aggrievement. On December 5th, Trump acknowledged, “I’ve probably worked harder in the last three weeks than I ever have in my life.” (He was not talking about managing the pandemic , which since the election has claimed a hundred and fifty thousand American lives.) Militant pro-Trump outfits like the Proud Boys—a national organization dedicated to “reinstating a spirit of Western chauvinism” in America—had been openly gearing up for major violence. In early January, on Parler, an unfiltered social-media site favored by conservatives, Joe Biggs, a top Proud Boys leader, had written, “Every law makers who breaks their own stupid Fucking laws should be dragged out of office and hung.” On the Mall, a makeshift wooden gallows, with stairs and a rope, had been constructed near a statue of Ulysses S. Grant. Some of the marchers nearby carried Confederate flags. Up ahead, the dull thud of stun grenades could be heard, accompanied by bright flashes. “They need help!” a man shouted. “It’s us versus the cops!” Someone let out a rebel yell. Scattered groups wavered, debating whether to join the confrontation. “We lost the Senate—we need to make a stand now ,” a bookish-looking woman in a down coat and glasses appealed to the person next to her. The previous day, a runoff in Georgia had flipped two Republican Senate seats to the Democrats, giving them majority control.
Hundreds of Trump supporters had forced their way past barricades to the Capitol steps. In anticipation of Biden’s Inauguration, bleachers had been erected there, and the sides of the scaffolding were wrapped in ripstop tarpaulin. Officers in riot gear blocked an open flap in the fabric; the mob pressed against them, screaming insults.
“You are traitors to the country!” a man barked at the police through a megaphone plastered with stickers from “InfoWars,” the incendiary Web program hosted by the right-wing conspiracist Alex Jones.
Behind the man stood Biggs, the Proud Boys leader. He wore a radio clipped onto the breast pocket of his plaid flannel shirt. Not far away, I spotted a “straight pride” flag.
There wasn’t nearly enough law enforcement to fend off the mob, which pelted the officers with cans and bottles. One man angrily invoked the pandemic lockdown: “Why can’t I work? Where’s my ‘pursuit of happiness’?” Many people were equipped with flak jackets, helmets, gas masks, and tactical apparel. Guns were prohibited for the protest, but a man in a cowboy hat, posing for a photograph, lifted his jacket to reveal a revolver tucked into his waistband. Other Trump supporters had Tasers, baseball bats, and truncheons. I saw one man holding a coiled noose.
“Hang Mike Pence!” people yelled.
On the day Joe Biden’s win was projected, hundreds of Trump supporters protested at the Pennsylvania state capitol.
Photograph by Balazs Gardi for The New Yorker
Soon the mob swarmed past the officers, into the understructure of the bleachers, and scrambled through its metal braces, up the building’s granite steps. Toward the top was a temporary security wall with three doors, one of which was instantly breached. Dozens of police stood behind the wall, using shields, nightsticks, and pepper spray to stop people from crossing the threshold. Other officers took up positions on planks above, firing a steady barrage of nonlethal munitions into the solid mass of bodies. As rounds tinked off metal, and caustic chemicals filled the space as if it were a fumigation tent, some of the insurrectionists panicked: “We need to retreat and assault another point!” But most remained resolute. “Hold the line!” they exhorted. “Storm!” Martial bagpipes blared through portable speakers.
“Shoot the politicians!” somebody yelled.
“Fight for Trump!” A jet of pepper spray incapacitated me for about twenty minutes. When I regained my vision, the mob was streaming freely through all three doors. I followed an overweight man in a Roman-era costume—sandals, cape, armguards, dagger—away from the bleachers and onto an open terrace on the Capitol’s main level. People clambered through a shattered window. Video later showed that a Proud Boy had smashed it with a riot shield. A dozen police stood in a hallway softly lit by ornate chandeliers, mutely watching the rioters—many of them wearing Trump gear or carrying Trump flags—flood into the building. Their cries resonated through colonnaded rooms: “Where’s the traitors?” “Bring them out!” “Get these fucking cocksucking Commies out!” The attack on the Capitol was a predictable apotheosis of a months-long ferment. Throughout the pandemic, right-wing protesters had been gathering at statehouses, demanding entry. In April, an armed mob had filled the Michigan state capitol, chanting “Treason!” and “Let us in!” In December, conservatives had broken the glass doors of the Oregon state capitol, overrunning officers and spraying them with chemical agents. The occupation of restricted government sanctums was an affirmation of dominance so emotionally satisfying that it was an end in itself—proof to elected officials, to Biden voters, and also to the occupiers themselves that they were still in charge. After one of the Trump supporters breached the U.S. Capitol, he insisted through a megaphone, “We will not be denied.” There was an unmistakable subtext as the mob, almost entirely white, shouted, “Whose house? Our house!” One man carried a Confederate flag through the building. A Black member of the Capitol Police later told BuzzFeed News that, during the assault, he was called a racial slur fifteen times.
I followed a group that broke off to advance on five policemen guarding a side corridor. “Stand down,” a man in a MAGA hat commanded. “You’re outnumbered. There’s a fucking million of us out there, and we are listening to Trump—your boss.”
“We can take you out,” a man beside him warned.
The officers backpedalled the length of the corridor, until we arrived at a marble staircase. Then they moved aside. “We love you guys—take it easy!” a rioter yelled as he bounded up the steps, which led to the Capitol’s central rotunda.
On an open terrace on the U.S. Capitol’s main level, Trump supporters clambered through a shattered window. “Where’s the traitors?” they shouted.
Photograph by Balazs Gardi for The New Yorker
Beneath the soaring dome, surrounded by statues of former Presidents and by large oil paintings depicting such historical scenes as the embarkation of the Pilgrims and the presentation of the Declaration of Independence, a number of young men chanted, “America first!” The phrase was popularized in 1940 by Nazi sympathizers lobbying to keep the U.S. out of the Second World War; in 2016, Trump resurrected it to describe his isolationist foreign and immigration policies. Some of the chanters, however, waved or wore royal-blue flags inscribed with “AF,” in white letters. This is the logo for the program “America First,” which is hosted by Nicholas Fuentes, a twenty-two-year-old Holocaust denier, who promotes a brand of white Christian nationalism that views politics as a means of preserving demographic supremacy. Though America Firsters revile most mainstream Republicans for lacking sufficient commitment to this priority—especially neoconservatives, whom they accuse of being subservient to Satan and Jews—the group’s loyalty to Trump is, according to Fuentes, “unconditional.”
The America Firsters and other invaders fanned out in search of lawmakers, breaking into offices and revelling in their own astounding impunity. “Nancy, I’m ho-ome!” a man taunted, mimicking Jack Nicholson’s character in “The Shining.” Someone else yelled, “1776—it’s now or never.” Around this time, Trump tweeted, “Mike Pence didn’t have the courage to do what should have been done to protect our Country. . . . USA demands the truth!” Twenty minutes later, Ashli Babbitt, a thirty-five-year-old woman from California, was fatally shot while climbing through a barricaded door that led to the Speaker’s lobby in the House chamber, where representatives were sheltering. The congresswoman Alexandria Ocasio-Cortez, a Democrat from New York, later said that she’d had a “close encounter” with rioters during which she thought she “was going to die.” Earlier that morning, another representative, Lauren Boebert—a newly elected Republican, from Colorado, who has praised QAnon and promised to wear her Glock in the Capitol—had tweeted, “Today is 1776.” When Babbitt was shot, I was on the opposite side of the Capitol, where people were growing frustrated by the empty halls and offices.
“Where the fuck are they?” “Where the fuck is Nancy?” No one seemed quite sure how to proceed. “While we’re here, we might as well set up a government,” somebody suggested.
Then a man with a large “ AF ” flag—college-age, cheeks spotted with acne—pushed through a series of tall double doors, the last of which gave onto the Senate chamber.
“Praise God!”
There were signs of a hasty evacuation: bags and purses on the plush blue-and-red carpet, personal belongings on some of the desks. From the gallery, a man in a flak jacket called down, “Take everything! Take all that shit!”
“No!” an older man, who wore an ammo vest and held several plastic flex cuffs, shouted. “We do not take anything.” The man has since been identified as Larry Rendall Brock, Jr., a retired Air Force lieutenant colonel.
The young America Firster went directly to the dais and installed himself in the leather chair recently occupied by the Vice-President. Another America Firster filmed him extemporizing a speech: “Donald Trump is the emperor of the United States . . .” “Hey, get out of that chair,” a man about his age, with a thick Southern drawl, said. He wore cowhide work gloves and a camouflage hunting jacket that was several sizes too large for him. Gauze hung loosely around his neck, and blood, leaking from a nasty wound on his cheek, encrusted his beard. Later, when another rioter asked for his name, he responded, “Mr. Black.” The America Firster turned and looked at him uncertainly.
“We’re a democracy,” Mr. Black said.
“Bro, we just broke into the Capitol,” the America Firster scoffed. “What are you talking about?” Brock, the Air Force veteran, said, “We can’t be disrespectful.” Using the military acronym for “information operations,” he explained, “You have to understand—it’s an I.O. war.” The America Firster grudgingly left the chair. More than a dozen Trump supporters filed into the chamber. A hundred antique mahogany desks with engraved nameplates were arranged in four tiered semicircles. Several people swung open the hinged desktops and began rifling through documents inside, taking pictures with their phones of private notes and letters, partly completed crossword puzzles, manuals on Senate procedure. A man in a construction hard hat held up a hand-signed document, on official stationery, addressed from “Mitt” to “Mike”—presumably, Romney and Pence. It was the speech that Romney had given, in February, 2020, when he voted to impeach Trump for pressuring the President of Ukraine to produce dirt on Biden. “Corrupting an election to keep oneself in office is perhaps the most abusive and disruptive violation of one’s oath of office that I can imagine,” Romney had written.
Armed militia members attended a Stop the Steal rally in Harrisburg, Pennsylvania, on November 7th.
Photograph by Balazs Gardi for The New Yorker
Some senators had printed out their prepared remarks for the election certification that the insurrectionists had disrupted. The man in the hard hat found a piece of paper belonging to Ted Cruz and said, “He was gonna sell us out all along—look! ‘Objection to counting the electoral votes of the state of Arizona.’ ” He paused. “Oh, wait, that’s actually O.K.”
“He’s with us,” an America Firster said.
Another young man, wearing sweatpants and a long-sleeved undershirt, seemed unconvinced. Frantically flipping through a three-ring binder on Cruz’s desk, he muttered, “There’s gotta be something in here we can fucking use against these scumbags.” Someone looking on commented, with serene confidence, “Cruz would want us to do this, so I think we’re good.”

Mr. Black wandered around in a state of childlike wonder. “This don’t look big enough,” he muttered. “This can’t be the right place.” On January 14th, Joshua Black was arrested, in Leeds, Alabama, after he posted a confession on YouTube in which he explained, “I just felt like the spirit of God wanted me to go in the Senate room.” On the day of the riot, as he took in the chamber, he ordered everyone, “Don’t trash the place. No disrespect.”

After a while, rather than defy him, nearly everybody left the chamber. For a surreal interlude, only a few people remained. Black’s blood-smeared cheek was grotesquely swollen, and as I looked closer I glimpsed the smooth surface of a yellow plastic projectile embedded deeply within it.
“I’m gonna call my dad,” he said, and sat down on the floor, leaning his back against the dais.
A moment later, the door at the back of the chamber’s center aisle swung open, and a man strode through it wearing a fur headdress with horns, carrying a spear attached to an American flag. He was shirtless, his chest covered with Viking and pagan tattoos, his face painted red, white, and blue. It was Jacob Chansley, a vocal QAnon proponent from Arizona, popularly known by his pseudonym, the Q Shaman. Both on the Mall and inside the Capitol, I’d seen countless signs and banners promoting QAnon, whose acolytes believe that Trump is working to dismantle an occult society of cannibalistic pedophiles. At the base of the Washington Monument, I’d watched Chansley assure people, “We got ’em right where we want ’em! We got ’em by the balls, baby, and we’re not lettin’ go!”

“Fuckin’ A, man,” he said now, looking around with an impish grin.

A young policeman had followed closely behind him. Pudgy and bespectacled, with a medical mask over red facial hair, he approached Black, and asked, with concern, “You good, sir? You need medical attention?”

“I’m good, thank you,” Black responded. Then, returning to his phone call, he said, “I got shot in the face with some kind of plastic bullet.”

“Any chance I could get you guys to leave the Senate wing?” the officer inquired. It was the tone of someone trying to lure a suicidal person into climbing down from a ledge.
“We will,” Black assured him. “I been making sure they ain’t disrespectin’ the place.”

“O.K., I just want to let you guys know—this is, like, the sacredest place.”

Chansley had climbed onto the dais. “I’m gonna take a seat in this chair, because Mike Pence is a fucking traitor,” he announced. He handed his cell phone to another Trump supporter, telling him, “I’m not one to usually take pictures of myself, but in this case I think I’ll make an exception.” The policeman looked on with a pained expression as Chansley flexed his biceps.
Rioters forced their way past barricades to the Capitol steps, over which bleachers had been erected in anticipation of Biden’s Inauguration. There wasn’t nearly enough law enforcement to fend off the mob.
Photograph by Balazs Gardi for The New Yorker

A skinny man in dark clothes told the officer, “This is so weird—like, you should be stopping us.” The officer pointed at each person in the chamber: “One, two, three, four, five.” Then he pointed at himself: “One.”

After Chansley had his photographs, the officer said, “Now that you’ve done that, can I get you guys to walk out of this room, please?”

“Yes, sir,” Chansley said. He stood up and took a step, but then stopped. Leaning his spear against the Vice-President’s desk, he found a pen and wrote something on a sheet of paper.
“I feel like you’re pushing the line,” the officer said.
Chansley ignored him. After he had set down the pen, I went behind the desk. Over a roll-call list of senators’ names, the Q Shaman had scrawled, “ITS ONLY A MATTER OF TIME / JUSTICE IS COMING!”

The Capitol siege was so violent and chaotic that it has been hard to discern the specific political agendas of its various participants. Many of them, however, went to D.C. for two previous events, which were more clarifying. On November 14th, tens of thousands of Republicans, convinced that the Democrats had subverted the will of the people in what amounted to a bloodless coup, marched to the Supreme Court, demanding that it overturn the election. For four years, Trump had batted away every inconvenient fact with the phrase “fake news,” and his base believed him when he attributed his decisive defeat in both the Electoral College and the popular vote to “rigged” machines and “massive voter fraud.” While the President’s lawyers inundated battleground states with spurious litigation, one of them, during an interview on Fox Business, acknowledged the basis of their strategy: “We’re waiting for the United States Supreme Court, of which the President has nominated three Justices, to step in and do something.” After nearly every suit had collapsed—with judges appointed by Republicans and Democrats alike harshly criticizing the accusations as “speculative,” “incorrect,” and “not credible,” and Trump’s own Justice Department vouching for the integrity of the election—the attorney general of Texas petitioned the Supreme Court to invalidate all the votes from Wisconsin, Georgia, Pennsylvania, and Michigan (swing states that went for Biden). On December 11th, the night before the second D.C. demonstration, the Justices declined to hear the case, dispelling once and for all the fantasy that Trump, despite losing the election, might legally remain in office.
The next afternoon, throngs of Trump supporters crowded into Freedom Plaza, an unadorned public square equidistant from the Justice Department and the White House. On one side, a large audience pressed around a group of preppy-looking young men wearing plaid shirts, windbreakers, khakis, and sunglasses. Some held rosaries and crosses, others royal-blue “AF” flags. The organizers had not included Fuentes, the “America First” host, in their lineup, but when he arrived at Freedom Plaza the crowd parted for him, chanting, “Groyper!” The name, which America Firsters call one another, derives from a variation of the Pepe the Frog meme, which is fashionable among white supremacists.
Diminutive and clean-shaven, with boyish features and a toothy smile, Fuentes resembled, in his suit and red tie, a recent graduate dressed for a job interview. (He dropped out of Boston University after his freshman year, when other students became hostile toward him for participating in the deadly neo-Nazi rally in Charlottesville, Virginia, in 2017, and for writing on Facebook that “a tidal wave of white identity is coming.”) Fuentes climbed atop a granite retaining wall, and someone handed him a megaphone. As his speech approached a crescendo of indignation, more and more attendees gravitated to the groypers. “It is us and our ancestors that created everything good that you see in this country,” Fuentes said. “All these people that have taken over our country—we do not need them.” The crowd roared, “Take it back!”—a phrase that would soon ring inside the Capitol.
“It’s time for us to start saying another word again,” Fuentes shouted. “A very important word that describes the situation we’re in. That word is ‘parasite.’ What is happening in this country is parasitism.” Arguing that Trump alone represented “our interests”—an end to all legal and illegal immigration, gay rights, abortion, free trade, and secularism—Fuentes distilled America Firstism into concise terms: “It is the American people, and our leader, Donald Trump, against everybody else in this country and this world.” The Republican governors, judges, and legislators who had refused to leverage their authority to secure Trump four more years in the White House—“traitors within our own ranks”—were on “a list” of people to be taken down. Fuentes also opposed the Constitution’s checks and balances, which had enabled Biden to prevail. “Make no mistake about it,” he declared. “The system is our enemy.”

During the nine weeks between November 3rd and January 6th, extremists like Fuentes did their utmost to take advantage of the opening that Trump created for them by refusing to concede. They were frank about their intentions: undoing not just the 2020 Presidential outcome but also any form of representative government that allows Democrats to obtain and exercise power. Correctly pointing out that a majority of Republicans believed that the election had been stolen, Fuentes argued, “This is the opportunity to galvanize the patriots of this country behind a real solution to these problems that we’re facing.” He also said, “If we can’t get a country that we deserve to live in through the legitimate process, then maybe we need to begin to explore some other options.” In case anybody was confused about what those options might be, Fuentes explained, “Our Founding Fathers would get in the streets, and they would take this country back by force if necessary. And that is what we must be prepared to do.”

Cartoon by Bruce Eric Kaplan

In the days before January 6th, calls for a “real solution” became progressively louder. Trump, by both amplifying these voices and consolidating his control over the Republican Party, conferred extraordinary influence on the most deranged and hateful elements of the American right. On December 20th, he retweeted a QAnon supporter who used the handle @cjtruth: “It was a rigged election but they were busted. Sting of the Century! Justice is coming!” A few weeks later, a barbarian with a spear was sitting in the Vice-President’s chair.
As Fuentes wrapped up his diatribe, he noticed a drag queen standing on the periphery of the crowd. She wore a blond wig and an evening gown with a beauty-queen sash identifying her as Lady MAGA.
At the November D.C. rally, I had been surprised to see Trump supporters lining up to have their pictures taken with her. Now Fuentes yelled, “That is disgusting! I don’t want to see that!,” and the groypers wheeled on her, bellowing in unison, “Shame!” No one in the crowd objected.
While Fuentes was proposing a movement to “take this country back by force,” a large contingent of Proud Boys marched by. Members from Illinois, Pennsylvania, Oregon, California, and elsewhere were easy to identify. Most were dressed in the organization’s black-and-yellow colors. Some had “RWDS”—Right-Wing Death Squad—hats and patches; others wore balaclavas, kilts, hockey masks, or batting helmets. One man was wearing a T-shirt with an image of South American dissidents being thrown out of a helicopter and the words “PINOCHET DID NOTHING WRONG!” Another T-shirt featured a Nazi eagle perched on a fasces, below the acronym “6MWE”—Six Million Wasn’t Enough—a reference to the number of Jews slaughtered in the Holocaust.
Many of the Proud Boys were drunk. At around nine-thirty that morning, I’d stopped by Harry’s Pub, a dive bar close to Freedom Plaza, and found the street outside filled with men drinking Budweiser and White Claw. “We are going to own this town!” one of them howled. At the November 14th rally, clashes between the Proud Boys and antifascists had left a number of people injured. Although most of the fights I witnessed then had been instigated by the Proud Boys, Trump had tweeted, “ANTIFA SCUM ran for the hills today when they tried attacking the people at the Trump Rally, because those people aggressively fought back.” It was clear that the men outside Harry’s on December 12th had travelled to D.C. to engage in violence, and that they believed the President endorsed their doing so. Trump had made an appearance at the previous rally, waving through the window of his limousine; now I overheard a Proud Boy tell his comrade, “I wanna see Trump drive by and give us one of these.” He flashed an “O.K.” hand sign, which has become a gesture of allegiance among white supremacists. There would be no motorcade this time, but while Fuentes addressed the groypers Trump circled Freedom Plaza in Marine One, the Presidential helicopter.
The conspiracist Alex Jones dominated a pro-Trump rally on November 14th. “Down with the deep state!” Jones yelled. “The answer to their ‘1984’ tyranny is 1776!”

Photograph by Balazs Gardi for The New Yorker

The Proud Boys who marched past Fuentes at the end of his December 12th speech were heading to the Washington Monument. When I got there, hundreds of them covered the grassy expanse near the obelisk. “Let’s take Black Lives Matter Plaza!” someone suggested.

In June, the security fence around the White House had been expanded, subsuming green spaces previously open to the public, in response to protests over the killing of George Floyd, in Minneapolis. Muriel Bowser, the mayor of D.C., had renamed two blocks adjacent to the fence Black Lives Matter Plaza, and commissioned the city to paint “BLACK LIVES MATTER” across the pavement in thirty-five-foot-high letters. Throughout the latter half of 2020, Trump had sought to dismiss the popular uprisings that Floyd’s death had precipitated by ascribing them to Antifa, which he vilified as a terrorist organization. The Proud Boys had seized on Trump’s conflation to recast their small-scale rivalry with antifascists in leftist strongholds like Berkeley and Portland as the front line of a national culture war. During the Presidential campaign, Trump’s histrionic exaggerations of the threat posed by Antifa fuelled conservative support for the Proud Boys, allowing them to vastly expand their operations and recruitment. The day after a Presidential debate in which Trump told the Proud Boys to “stand back and stand by,” Lauren Witzke, a Republican Senate candidate in Delaware, publicly thanked the group for having provided her with “free security.” (She lost the race.)

As Proud Boys from across the nation walked downhill from the Washington Monument toward Black Lives Matter Plaza on December 12th, they chanted, “Whose plaza? Our plaza!” Many of them carried staffs, canes, and holstered Maglites. There was a heavy police presence downtown, and it was still broad daylight. “We got numbers, let’s do this!” a Proud Boy with a newsboy cap and a gray goatee shouted. “Fuck these gender-confused terrorists! They’ll put the girls out first—they think that’s gonna stop us?” His name was Richard Schwetz, though he went by Dick Sweats. (He could not be reached for comment.) While some Proud Boys hesitated, others followed Schwetz, including a taciturn man with a high-and-tight military haircut and a large Confederate flag attached to a wooden dowel. I saw him again at the Capitol on January 6th.
On Constitution Avenue, the Proud Boys encountered an unsuspecting Black man coming up the sidewalk. They began shoving and jeering at him. As the man ran away, several of them chased him, swinging punches at his back.
Officers had cordoned off Black Lives Matter Plaza, but the group soon reached Farragut Square, where half a dozen counter-protesters—two men and four women—stood outside the Army and Navy Club, dressed in black clothes marked with medic crosses made from red tape. They were smaller and younger than most of the Proud Boys, and visibly unnerved. As Schwetz and others closed in on them, the medics retreated until they were pressed against a waist-high hedge. “Fucking pussies!” Schwetz barked, hitting two of the women. Other Proud Boys took his cue, assailing the activists, who disappeared into the hedge under a barrage of boots and fists. Policemen stopped the beating by deploying pepper spray, but they did not arrest any Proud Boys, who staggered off in search of a new target.
They promptly found one: another Black man, passing through on his bicycle. He wore Lycra exercise gear and looked perplexed by what was happening on the streets. He said nothing to anybody, but “Black Lives Matter” was written in small letters on his helmet. The Proud Boys surrounded him. Pointing at some officers watching from a few feet away, a man in a bulletproof vest, carrying a cane, said, “They’re here now, but eventually they won’t be. And we’re gonna take this country back—believe that shit. Fuck Black Lives Matter.” Before walking off, he added, “What y’all need to do is take your sorry asses to the ghetto.”

This was the tenor of the next eight hours, as hundreds of Proud Boys, groypers, militia members, and other Trump supporters openly marauded on the streets around the White House, becoming more inebriated and belligerent as the night wore on, hunting for people to harass and assault. “Fight for Trump!” they chanted. At one point, Proud Boys outside Harry’s Pub ganged up on another Black man, Philip Johnson, who took out a knife in self-defense, wounding four of them. Police intervened and rushed Johnson to the hospital, where he was arrested. The charges were later dropped. Outside Harry’s, I heard a Proud Boy joking about Johnson’s injuries: “He’s going to look different tomorrow.”

Shortly thereafter, I followed a number of groypers past a hair salon with a rainbow poster attached to its window. Tearing the poster to pieces, a young man screamed, “This is sodomy!”

“Fuck the fags!” others cried.
By eleven, I was following another group, which happened upon the Metropolitan African Methodist Episcopal Church. Built in the late nineteenth century, the steepled red brick building had hosted the funerals of Frederick Douglass and Rosa Parks. President Barack Obama had attended a service there on the morning of his second Inauguration. Outside the entrance, a large Black Lives Matter sign, illuminated by floodlamps, hung below a crucifix. Climbing over a low fence, several Proud Boys and men in red MAGA hats ripped down the sign and pried off boards from its scaffolding to use as weapons, eliciting wild cheers.
“Whose streets?” “Our streets!” December 12th, just after 11 P.M., outside the Metropolitan African Methodist Episcopal Church.
More people piled into the garden of the church, stomping on the sign and slashing it with knives. Amid the frenzy, one of the Trump supporters removed another placard from a different display. It had a verse from the Bible: “I shall not sacrifice to the Lord my God that which costs me nothing.”

“Hey, that’s Christian,” someone admonished.
The man nodded and gingerly set the placard down.
The cascade of destruction and ugliness triggered by Trump’s lies about the election consummates a narrative that predates his tenure in the White House. In 2011, Trump became an evangelist for birtherism, the false assertion that Obama had been born in Kenya and was therefore an illegitimate President. Whether or not Trump believed the racist slander, he had been apprised of its political utility by his friend Roger Stone, who made his political reputation as a dirty trickster for President Richard Nixon.
Five years later, in the months before the 2016 election, Stone created a Web site called Stop the Steal, which he used to undermine Hillary Clinton’s expected victory by insisting that the election had been rigged—a position that Trump maintained even after he won, to explain his deficit in the popular vote.
The day after the 2020 election, a new Facebook page appeared: Stop the Steal. Among its earliest posts was a video from the T.C.F. Center, in downtown Detroit, where Michigan ballots were counted. The video showed Republican protesters who were said to have been denied access to the room where absentee votes were being processed. Overnight, Stop the Steal gained more than three hundred and twenty thousand followers—making it among the fastest-growing groups in Facebook history. The company quickly deleted it.
I spent much of Election Day at the T.C.F. Center. COVID-19 had killed three thousand residents of Wayne County, which includes Detroit, causing an unprecedented number of people to vote by mail. Nearly two hundred thousand absentee ballots were being tallied in a huge exhibit hall. Roughly eight hundred election workers were opening envelopes, removing ballots from sealed secrecy sleeves, and logging names into an electronic poll book. (Before Election Day, the clerk’s office had compared and verified signatures.) The ballots were then brought to a row of high-speed tabulators, which could process some fifty sheets a minute.
Republican and Democratic challengers roamed the hall. The press was confined to a taped-off area, but, as far as I could see, the Republicans were given free rein of the space. They checked computer monitors that displayed a growing list of names. A man’s voice came over a loudspeaker to remind the election workers to “provide for transparency and openness.” Christopher Thomas, who served as Michigan’s election director for thirty-six years and advised the clerk’s office in 2020, told me that things had gone remarkably smoothly. The few challengers who’d raised objections had mostly misunderstood technical aspects of the process. “We work through it with them,” Thomas said. “We’re happy to have them here.”

Early returns showed Trump ahead in Michigan, but many absentee ballots had yet to be processed. Because Trump had relentlessly denigrated absentee voting throughout the campaign, in-person votes had been expected to skew his way. It was similarly unsurprising when his lead diminished after results arrived from Wayne County and other heavily Democratic jurisdictions. Nonetheless, shortly after midnight, Trump launched his post-election misinformation campaign: “We are up BIG, but they are trying to STEAL the Election.”

A makeshift wooden gallows, with stairs and a rope, was erected near the Capitol on January 6th. Since November, militant pro-Trump outfits had been openly gearing up for major violence. In early January, on Parler, a Proud Boys leader had written, “Every law makers who breaks their own stupid Fucking laws should be dragged out of office and hung.”

Photograph by Balazs Gardi for The New Yorker

The next day, I found an angry mob outside the T.C.F. Center. Police officers guarded the doors. Most of the protesters had driven down from Macomb County, which is eighty per cent white and went for Trump in both 2016 and 2020. “We know what’s going on here,” one man told me. “They’re stuffing the ballot box.” He said that his local Republican Party had sent out an e-mail urging people to descend on the center. Politico later reported that Laura Cox, the chairwoman of the Michigan G.O.P., had personally implored conservative activists to go there. I had seen Cox introduce Trump at a rally in Grand Rapids the night before the election; she had promised the crowd “four more years—or twelve, we’ll talk about that later.”

Dozens of protesters had entered the T.C.F. Center before it was sealed. Downstairs, they pressed against a glass wall of the exhibit hall, chanting at the election workers on the other side. The most strident member of the group was Ken Licari, a Macomb County resident with a thin beard and a receding hairline. The two parties had been allocated one challenger for each table in the hall, but Republicans had already exceeded that limit, and Licari was irate about being shut out. When an elderly A.C.L.U. observer was ushered past him, Licari demanded to know where she was from. The woman ignored him, and he shouted, “You’re a coward, is where you’re from!”

“Be civil,” a woman standing near him said. A forty-eight-year-old caretaker named Lisa, she had stopped by the convention center on a whim, “just to see.” Unlike almost everyone else there, Lisa was Black and from Detroit. She gently asked Licari, “If this place has cameras, and you’ve got media observing, you’ve got different people from both sides looking—why do you think someone would be intentionally trying to cheat with all those eyes?”

“You would have to have a hundred thirty-four cameras to track every ballot,” Licari answered.
“These ballots are from Detroit,” Lisa said. “Detroit is an eighty-per-cent African-American city. There’s a huge percentage of Democrats. That’s just a fact.” She gestured at the predominantly Black poll workers across the glass. “This is my whole thing—I have a basic level of respect for these people.” Rather than respond to this tacit accusation of bias, Licari told Lisa that a batch of illegal ballots had been clandestinely delivered to the center at three in the morning. This was a reference to another cell-phone video, widely shared on social media, that showed a man removing a case from the back of a van, loading it in a wagon, and pulling the wagon into the building. I had watched the video and had recognized the man as a member of a local TV news crew I’d noticed the previous day. I distinctly recall admiring the wagon, which he had used to transport his camera gear.
“There’s a lot of suspicious activity that goes on down here in Detroit,” another Republican from Macomb County told me. “There’s a million ways you can commit voter fraud, and we’re afraid it was committed on a massive scale.” I had seen the man on Election Day, working as a challenger inside the exhibit hall. Now, as then, he wore old Army dog tags and a hooded Michigan National Guard sweatshirt with the sleeves cut off. I asked him if he had observed any fraud with his own eyes. He had not. “It wasn’t committed by these people,” he said. “But the ballots that they were given and ran through the scanners—we don’t know where they came from.”

Like many of the Republicans in the T.C.F. Center, the man had been involved in anti-lockdown demonstrations against Michigan’s governor, Gretchen Whitmer, a Democrat. While reporting on those protests, I’d been struck by how the mostly white participants saw themselves as upholding the tradition of the civil-rights movement. Whitmer’s public-health measures were condemned as oppressive infringements on sacrosanct liberties, and those who defied them compared themselves to Rosa Parks. The equivalency became even more bizarre after George Floyd was killed and anti-lockdown activists in Michigan adopted Trump’s law-and-order rhetoric. Yet I never had the impression that those Republican activists were disingenuous. Similarly, the white people shouting at the Black election workers in Detroit seemed truly convinced of their own persecution.
That conviction had been instilled at least in part by politicians who benefitted from it. In April, in response to Whitmer’s aggressive public-health measures, Trump had tweeted, “Liberate Michigan!” Two weeks later, heavily armed militia members entered the state capitol, terrifying lawmakers. Mike Shirkey, the Republican majority leader in the Michigan Senate, denounced the organizers of the action—a group called the American Patriot Council—as “a bunch of jackasses” who had brandished “the threat of physical harm to stir up fear and rancor.” But, as Trump and other Republicans stoked anti-lockdown resentment across the U.S., Shirkey reversed himself. In May, he appeared at an American Patriot Council event in Grand Rapids, where he told the assembled militia members, “We need you now more than ever.” A few months later, two brothers in the audience that day, William and Michael Null, were arrested for providing material support to a network of right-wing terrorists.
Trump supporters inside the Capitol on January 6th. For right-wing protesters, the occupation of restricted government sanctums was an affirmation of dominance so emotionally satisfying that it was an end in itself—proof to elected officials, to Biden voters, and also to themselves that they were still in charge.
Photograph by Balazs Gardi for The New Yorker

Outside the T.C.F. Center, I ran into Michelle Gregoire, a twenty-nine-year-old school-bus driver from Battle Creek. The sleeves of her sweatshirt were pushed up to reveal a “We the People” tattoo, and she wore a handgun on her belt. We had met at several anti-lockdown protests, including the one in Grand Rapids where Shirkey spoke. In April, Gregoire had entered the gallery overlooking the House chamber in the Michigan state capitol, in violation of COVID-19 protocols. She had to be dragged out by the chief sergeant at arms, and she is now charged with committing a felony assault against him. (She has pleaded not guilty.) Gregoire is also an acquaintance of the Nulls. “They’re innocent,” she told me in Detroit. “There’s an attack on conservatives right now.”

She echoed many Republicans I have met in the past nine months who have described to me the same animating emotion: fear. “A lot of conservatives are really scared,” she said. “Extreme government overreach” during the pandemic had proved that the Democrats aimed, above all, to subjugate citizens. In October, Facebook deleted Gregoire’s account, which contained posts about a militia that she belonged to at the time. She told me, “If the left gets their way, they will silence whoever they want.” She then expressed another prevalent apprehension on the right: that Democrats intend to disarm Americans, in order to render them defenseless against autocracy. “That terrifies me,” Gregoire said. “In other countries, they’ve said, ‘That will never happen here,’ and before you know it their guns are confiscated and they’re living under communism.”

The sense of embattlement that Trump and other Republican politicians encouraged throughout the pandemic primed many conservatives to assume Democratic foul play even before voting began. Last month, at a State Senate hearing on the count at the T.C.F. Center, a witness, offering no evidence of fraud, demanded to see evidence that none had occurred. “We believe,” he testified. “Prove us wrong.” The witness was Randy Bishop, a conservative Christian-radio host and a former county G.O.P. chairman, as well as a felon with multiple convictions for fraud. I’d watched Bishop deliver a rousing speech in June at an American Patriot Council rally, which Gregoire and the Null brothers had attended. “Carrying a gun with you at all times and being a member of a militia is also your civic duty,” Bishop had argued. According to the F.B.I., the would-be terrorists whom the Nulls abetted used the rally to meet and further their plans, which included televised executions of Democratic lawmakers. When I was under the bleachers at the U.S. Capitol, while the mob pushed up the steps, I noticed Jason Howland, a founder of the American Patriot Council, a few feet behind me in the scrum, leaning all his weight into the mass of bodies.
Even if it were possible to prove that the election was not stolen, it seems doubtful that conservatives who already feel under attack could be convinced. When Gregoire cited the man with the van smuggling a case of ballots into the T.C.F. Center, I told her that he was a journalist and that the case contained equipment. Gregoire shook her head. “No,” she said. “Those were ballots. It’s not a conspiracy when it’s documented and recorded.”

Conspiracy theories have always helped rationalize white grievance, and people who exploit white grievance for political or financial gain often purvey conspiracy theories. Roger Stone became Trump’s adviser for the 2016 Republican primaries, and frequently appeared on Alex Jones’s “InfoWars” show, which warned that the “deep state”—a nefarious shadow authority manipulating U.S. policy for the profit of élites—opposed Trump because he threatened its power. Jones has asserted that the Bush Administration was responsible for 9/11 and that the Sandy Hook Elementary School massacre never happened. During the 2016 campaign, Stone arranged for Trump to be a guest on “InfoWars.” “I will not let you down,” Trump promised Jones.
This compact with the conspiracist right strengthened over the next four years, as the President characterized his impeachment and the special counsel Robert Mueller’s report on Russian election meddling as “hoaxes” designed to “overthrow” him. (Stone was convicted of seven felonies related to the Mueller investigation, including making false statements and witness tampering. Trump pardoned him in December. Ten days later, Stone reactivated his Stop the Steal Web site, which began collecting donations for “security” in D.C. on January 6th.)

This past year, the scale of the pandemic helped conspiracists broaden the scope of their theories. Many COVID-19 skeptics believe that lockdowns, mask mandates, vaccines, and contact tracing are laying the groundwork for the New World Order—a genocidal communist dystopia that, Jones says, will look “just like ‘The Hunger Games.’ ” The architects of this apocalypse are such “globalists” as the Clintons, Bill Gates, and George Soros; their instruments are multinational institutions like the European Union, NATO, and the U.N. Whereas Trump has enfeebled these organizations, Biden intends to reinvigorate them. The claim of a plot to steal the election makes sense to people who see Trump as a warrior against deep-state chicanery. Like all good conspiracy theories, it affirms and elaborates preëxisting ones. Rejecting it can require renouncing an entire world view.
Trump’s allegations of vast election fraud have been a boon for professional conspiracists. Not long ago, Jones seemed to be at risk of sliding into obsolescence. Facebook, Twitter, Apple, Spotify, and YouTube had expelled him from their platforms in 2018, after he accused the bereaved parents of children murdered at Sandy Hook of being paid actors, prompting “InfoWars” fans to harass and threaten them. The bans curtailed Jones’s reach, but a deluge of COVID-19 propaganda drew millions of people to his proprietary Web sites. To some Americans, Jones’s dire warnings about the deep state and the New World Order looked prophetic, an impression that Trump’s claim of a stolen election only bolstered.
“Would you worry less about your relationship if I told you we’re about to get hit by a giant asteroid?”

Cartoon by Meredith Southard

After Facebook removed the Stop the Steal group that had posted the video from the T.C.F. Center, its creator, Kylie Jane Kremer, a thirty-year-old activist, conceived the November 14th rally in Washington, D.C., which became known as the Million MAGA March. That day, Jones joined tens of thousands of Trump supporters gathered at Freedom Plaza. Kremer, stepping behind a lectern with a microphone, promised “an incredible lineup” of speakers, after which, she said, everyone would proceed up Pennsylvania Avenue, to the Supreme Court. But, before Kremer could introduce her first guest, Jones had shouted through a bullhorn, “If the globalists think they’re gonna keep America under martial law, and they’re gonna put that Communist Chinese agent Biden in, they got another thing coming!” Hundreds of people cheered. Jones, who is all chest and no neck, pumped a fist in the air. “The march starts now!” he soon declared.

His usual security detail was supplemented by about a dozen Proud Boys, who formed a protective ring around him. The national chairman of the Proud Boys, Henry (Enrique) Tarrio, walked at his side. Tarrio, the chief of staff of Latinos for Trump, is the son of Cuban immigrants who fled Fidel Castro’s revolution. Although he served time in federal prison for rebranding and relabelling stolen medical devices, he often cites his family history to portray himself and the Proud Boys in a noble light. At an event in Miami in 2019, he stood behind Trump, wearing a T-shirt that said “ROGER STONE DID NOTHING WRONG!”

“Down with the deep state!” Jones yelled through his bullhorn. “The answer to their ‘1984’ tyranny is 1776!” As he and Tarrio continued along Pennsylvania Avenue, more and more people abandoned Kremer’s event to follow them. As we climbed toward the U.S. Capitol, I turned and peered down at a procession of Trump supporters stretching back for more than a mile. Flags waved like the sails of a bottlenecked armada. From this vantage, the Million MAGA March appeared to have been led by the Proud Boys and Jones. On the steps of the Supreme Court, he cried, “This is the beginning of the end of their New World Order!”

Invocations of the New World Order often raise the age-old spectre of Jewish cabals, and the Stop the Steal movement has been rife with anti-Semitism. At the protest that I attended on November 7th in Pennsylvania, a speaker elicited applause with the exhortation “Do not become a cog in the ZOG!” The acronym stands for “Zionist-occupied government.” Among the Trump supporters was an elderly woman who gripped a walker with her left hand and a homemade “Stop the Steal” sign with her right. The first letters of “Stop” and “Steal” were stylized to resemble Nazi S.S. bolts. In videos of the shooting inside the Capitol on January 6th, amid the mob attempting to reach members of Congress, a man—subsequently identified as Robert Keith Packer—can be seen in a sweatshirt emblazoned with the words “Camp Auschwitz.” (Packer has been arrested.)
On my way back down Pennsylvania Avenue on November 14th, after Jones’s speech, I fell in with a group of groypers chanting “Christian nation!” and “Emperor Trump!” I followed the young men to Freedom Plaza, where one of them read aloud an impassioned screed about “globalist scum” and the need to “strike down this foreign invasion.” When he finished, I noticed that two groypers standing near me were laughing. The response felt incongruous, until I recognized it as the juvenile thrill of transgression. One of them, his voice high with excitement, marvelled, “He just gave a fascist speech!”

A few days later, Nicholas Fuentes appeared on an “InfoWars” panel with Alex Jones and other right-wing conspiracists. During the discussion, Fuentes warned of the “Great Replacement.” This is the contention that Europe and the United States are under siege from nonwhites and non-Christians, and that these groups are incompatible with Western culture, identity, and prosperity. Many white supremacists maintain that the ultimate outcome of the Great Replacement will be “white genocide.” (In Charlottesville, neo-Nazis chanted, “Jews will not replace us!”; the perpetrators of the New Zealand mosque massacre and the El Paso Walmart massacre both cited the Great Replacement in their manifestos.) “What people have to begin to realize is that if we lose this battle, and if this transition is allowed to take place, that’s it,” Fuentes said. “That’s the end.”

“Submitting now will destroy you forever,” Jones agreed.
Because Fuentes and Jones characterize Democrats as an existential menace—Jones because they want to incrementally enslave humanity, Fuentes because they want to make whites a demographic minority—their fight transcends partisan politics. The same is true for the many evangelicals who have exalted Trump as a Messianic figure divinely empowered to deliver the country from satanic influences. Right-wing Catholics, for their part, have mobilized around the “church militant” movement—fostered by Stephen Bannon, Trump’s former chief strategist—which puts Trump at the forefront of a worldwide clash between Western civilization and Islamic “barbarity.” Crusader flags and patches were widespread at the Capitol insurrection.
Members of Trump’s base went to observe the tabulation of the vote in battleground states, and believed him when he attributed his decisive defeat to “rigged” machines and “massive voter fraud.”

Photograph by Balazs Gardi for The New Yorker

In the Senate chamber on January 6th, Jacob Chansley took off his horns and led a group prayer through a megaphone, from behind the Vice-President’s desk. The insurrectionists bowed their heads while Chansley thanked the “heavenly Father” for allowing them to enter the Capitol and “send a message” to the “tyrants, the communists, and the globalists.” Joshua Black, the Alabaman who had been shot in the face with a rubber bullet, said in his YouTube confession, “I praised the name of Jesus on the Senate floor. That was my goal. I think that was God’s goal.”

While the religiously charged demonization of globalists dovetails with QAnon, religious maximalism has also gone mainstream. Under Trump, Republicans throughout the country have consistently situated American politics in the context of an eternal, cosmic struggle between good and evil. In doing so, they have rendered constitutional principles of representation, pluralism, and the separation of powers less inviolable, given the magnitude of what is at stake.
Trump played to this sensibility on June 1st, a week after George Floyd was killed. Police officers used rubber bullets, batons, tear gas, and pepper-ball grenades to violently disperse peaceful protesters in Lafayette Square so that he could walk unmolested from the White House to a church and pose for a photograph while holding a Bible. Liberals were appalled. For many of the President’s supporters, however, the image was symbolically resonant. Lafayette Square was subsequently enclosed behind a tall metal fence, which racial-justice protesters decorated with posters, converting it into a makeshift memorial to victims of police violence. On the morning of the November 14th rally, thousands of Trump supporters passed the fence on their way to Freedom Plaza. Some of them stopped to rip down posters, and by nine o’clock cardboard littered the sidewalk.
“White folks feel real emboldened these days,” Toni Sanders, a local activist, told me. Sanders had been at the square on June 1st, with her wife and her nine-year-old stepson. “He was tear-gassed,” she said. “He’s traumatized.” She had returned there the day of the march to prevent people from defacing the fence, and had already been in several confrontations.

While we spoke, people carrying religious signs approached. They were affiliates of Patriot Prayer, a conservative Christian movement, based in Vancouver, Washington, whose rallies have often attracted white supremacists. Kyle Chapman, a prominent Patriot Prayer figure from California (and a felon), once headed the Fraternal Order of Alt-Knights, a “tactical defense arm” of the Proud Boys. A few days before the march, Chapman had posted a statement on social media proposing that the Proud Boys change their name to the Proud Goys, purge all “undesirables,” and “boldly address the issues of White Genocide” and “the right for White men and women to have their own countries where White interests are written into law.” The founder of Patriot Prayer, Joey Gibson, has praised Chapman as “a true patriot” and “an icon.” (He also publicly disavows racism and anti-Semitism.) In December, Gibson led the group that broke into the Oregon state capitol. “Look at them,” Sanders said as Gibson passed us, yelling about Biden being a communist. “Full of hate, and proud of it.” She shook her head. “If God were here, He would smite these motherfuckers.”

Since January 6th, some Republican politicians have distanced themselves from Trump. A few, such as Romney, have denounced him. But the Republican Party’s cynical embrace of Trump’s attempted power grab all the way up to January 6th has strengthened its radical flank while sidelining moderates. Seventeen Republican-led states and a hundred and six Republican members of Congress—well over half—signed on to the Texas suit asking the Supreme Court to disenfranchise more than twenty million voters. Republican officials shared microphones with white nationalists and conspiracists at every Stop the Steal event I attended. At the Million MAGA March, Louie Gohmert, a congressman from Texas, spoke shortly after Alex Jones on the steps of the Supreme Court. “This is a multidimensional war that the U.S. intelligence people have used on other governments,” Gohmert said—words that might have come from Jones’s mouth. “You not only steal the vote but you use the media to convince people that they’re not really seeing what they’re seeing.”

“We see!” a woman in the crowd cried.
In late December, Gohmert and other Republican legislators filed a lawsuit asking the courts to affirm Vice-President Pence’s right to unilaterally determine the results of the election. When federal judges dismissed the case, Gohmert declared on TV that the ruling had left patriots with only one form of recourse: “You gotta go to the streets and be as violent as Antifa and B.L.M.”

Gohmert is a mainstay of the Tea Party insurgency that facilitated Trump’s political rise. Both that movement and Trumpism are preoccupied as much with heretical conservatives as they are with liberals. At an October rally, Trump derided RINOs—Republicans in name only—as “the lowest form of human life.” After the election, any Republican who accepted Biden’s victory was similarly maligned. When Chris Krebs, a Trump appointee in charge of national cybersecurity, deemed the election “the most secure in American history,” the President fired him. Joe diGenova, Trump’s attorney, then said that Krebs “should be drawn and quartered—taken out at dawn and shot.”

There was an unmistakable subtext as the mob inside the Capitol, almost entirely white, shouted, “Whose house? Our house!”

Photograph by Balazs Gardi for The New Yorker

As Republican officials scrambled to prove their fealty to the President, some joined Gohmert in invoking the possibility of violent rebellion. In December, the Arizona Republican Party reposted a tweet from Ali Alexander, a chief organizer of the Stop the Steal movement, that stated, “I am willing to give my life for this fight.” The Twitter account of the Republican National Committee appended the following comment to the retweet: “He is. Are you?” Alexander is a convicted felon, having pleaded guilty to property theft in 2007 and credit-card abuse in 2008. In November, he appeared on the “InfoWars” panel with Jones and Fuentes, during which he alluded to the belief that the New World Order would forcibly implant people with digital-tracking microchips. “I’m just not going to go into that world,” Alexander said. He also expressed jubilant surprise at how successful he, Jones, and Fuentes had been in recruiting mainstream Republicans to their cause: “We are the crazy ones, rushing the gates. But we are winning!”

Jones, Fuentes, and Alexander were not seen rushing the gates when lives were lost at the Capitol on January 6th. Nor, for that matter, was Gohmert. Ashli Babbitt, the woman who was fatally shot, was an Air Force veteran who appears to have been indoctrinated in conspiracy theories about the election. She was killed by an officer protecting members of Congress—perhaps Gohmert among them. In her final tweet, on January 5th, Babbitt declared, “The storm is here”—a reference to a QAnon prophecy that Trump would expose and execute all his enemies. The same day that Babbitt wrote this, Alexander led crowds at Freedom Plaza in chants of “Victory or death!” During the sacking of the Capitol, he recorded a video from a rooftop, with the building in the distance behind him. “I do not denounce this,” he said.
Trump was lying when, after dispatching his followers to the Capitol, he assured them, “I’ll be with you.” But, in a sense, he was there—as were Jones, Fuentes, and Alexander. Their messaging was ubiquitous: on signs, clothes, patches, and flags, and in the way that the insurrectionists articulated what they were doing. At one point, I watched a man with a long beard and a Pittsburgh Pirates hat facing off against several policemen on the main floor of the Capitol. “I will not let this country be taken over by globalist communist scum!” he yelled, hoarse and shaking. “They want us all to be slaves! Everybody’s seen the documentation—it’s out in the open!” He could not comprehend why the officers would want to interfere in such a virtuous uprising. “You know what’s right,” he told them. Then he gestured vaguely at the rest of the rampaging mob. “Just like these people know what’s right.”

After Chansley, the Q Shaman, left his note on the dais, a new group entered the Senate chamber. Milling around was a man in a black-and-yellow plaid shirt, with a bandanna over his face. Ahead of January 6th, Tarrio, the Proud Boys chairman, had released a statement announcing that his men would “turn out in record numbers” for the event—but would be “incognito.” The man in the plaid shirt was the first Proud Boy I had seen openly wearing the organization’s signature colors. At several points, however, I heard grunts of “Uhuru!,” a Proud Boys battle cry, and a group attacking a police line outside the Capitol had sung “Proud of Your Boy”—from the Broadway version of “Aladdin”—for which the organization is sardonically named. One member of the group had flashed the “O.K.” sign and shouted, “Fuck George Floyd! Fuck Breonna Taylor! Fuck them all!” He seemed overcome with emotion, as if at last giving expression to a sentiment that he had long suppressed.
On January 4th, Tarrio had been arrested soon after his arrival at Dulles International Airport, for a destruction-of-property charge related to the December 12th event, where he’d set fire to a Black Lives Matter banner stolen from a historic Black church. (In an intersection outside Harry’s Pub, he had stood over the flames while Proud Boys chanted, “Fuck you, faggots!”) He was released shortly after his arrest but was barred from remaining in D.C. On the eve of the siege, followers of the official Proud Boys account on Parler were incensed. “Every cop involved should be executed immediately,” one user commented. “Time to resist and revolt!” another added. A third wrote, “Fuck these DC Police. Fuck those cock suckers up. Beat them down. You dont get to return to your families.”

Since George Floyd’s death, demands from leftists to curb police violence have inspired a Back the Blue movement among Republicans, and most right-wing outfits present themselves as ardently pro-law enforcement. This alliance is conditional, however, and tends to collapse whenever laws intrude on conservative values and priorities. In Michigan, I saw anti-lockdown protesters ridicule officers enforcing COVID-19 restrictions as “Gestapo” and “filthy rats.” When police cordoned off Black Lives Matter Plaza, Proud Boys called them “communists,” “cunts,” and “pieces of shit.” At the Capitol on January 6th, the interactions between Trump supporters and law enforcement vacillated from homicidal belligerence to borderline camaraderie—a schizophrenic dynamic that compounded the dark unreality of the situation.

When a phalanx of officers at last marched into the Senate chamber, no arrests were made, and everyone was permitted to leave without questioning. As we passed through the central doors, a sergeant with a shaved head said, “Appreciate you being peaceful.” His uniform was half untucked and missing buttons, and his necktie was ripped and crooked. Beside him, another officer, who had been sprayed with a fire extinguisher, looked as if a sack of flour had been emptied on him.
A policeman loitering in the lobby escorted us down a nearby set of stairs, where we overtook an elderly woman carrying a “TRUMP” tote bag. “We scared them off—that’s what we did, we scared the bastards,” she said, to no one in particular.
The man in front of me had a salt-and-pepper beard and a baseball cap with a “We the People” patch on the back. I had watched him collect papers from various desks in the Senate chamber and put them in a glossy blue folder. As police directed us to an exit, he walked out with the folder in his hand.
The afternoon was cold and blustery. Thousands of people still surrounded the building. On the north end of the Capitol, a renewed offensive was being mounted, on another entrance guarded by police. The rioters here were far more bitter and combative, for a simple reason: they were outside, and they wanted inside. They repeatedly charged the police and were repulsed with opaque clouds of tear gas and pepper spray.
“Fuck the blue!” people chanted.
“We have guns, too, motherfuckers!” one man yelled. “With a lot bigger rounds!”

Another man, wearing a do-rag that said “FUCK YOUR FEELINGS,” told his friend, “If we have to tool up, it’s gonna be over. It’s gonna come to that. Next week, Trump’s gonna say, ‘Come to D.C.’ And we’re coming heavy.” Later, I listened to a woman talking on her cell phone. “We need to come back with guns,” she said. “One time with guns, and then we’ll never have to do this again.”

Although the only shot fired on January 6th was the one that killed Ashli Babbitt, two suspected explosive devices were found near the Capitol, and a seventy-year-old Alabama man was arrested for possessing multiple loaded weapons, ammunition, and eleven Molotov cocktails. As the sun fell, clashes with law enforcement at times descended into vicious hand-to-hand brawling. During the day, more than fifty officers were injured and fifteen hospitalized. I saw several Trump supporters beat policemen with blunt instruments. Videos show an officer being dragged down stairs by his helmet and clobbered with a pole attached to an American flag. In another, a mob crushes a young policeman in a door as he screams in agony. One officer, Brian Sicknick, a forty-two-year-old, died after being struck in the head with a fire extinguisher. Several days after the siege, Howard Liebengood, a fifty-one-year-old officer assigned to protect the Senate, committed suicide.
Right-wing extremists justify such inconsistency by assigning the epithet “oath-breaker” to anyone in uniform who executes his duties in a manner they dislike. It is not difficult to imagine how, once Trump is no longer President, his most fanatical supporters could apply this caveat to all levels of government, including local law enforcement. At the rally on December 12th, Nicholas Fuentes underscored the irreconcilability of a radical-right ethos and pro-police, pro-military patriotism: “When they go door to door mandating vaccines, when they go door to door taking your firearms, when they go door to door taking your children, who do you think it will be that’s going to do that? It’s going to be the police and the military.”

During Trump’s speech on January 6th, he said, “The media is the biggest problem we have.” He went on, “It’s become the enemy of the people. . . . We gotta get them straightened out.” Several journalists were attacked during the siege. Men assaulted a Times photographer inside the Capitol, near the rotunda, as she screamed for help. After National Guard soldiers and federal agents finally arrived and expelled the Trump supporters, some members of the mob shifted their attention to television crews in a park on the east side of the building. Earlier, a man had accosted an Israeli journalist in the middle of a live broadcast, calling him a “lying Israeli” and telling him, “You are cattle today.” Now the Trump supporters surrounded teams from the Associated Press and other outlets, chasing off the reporters and smashing their equipment with bats and sticks.
There was a ritualistic atmosphere as the crowd stood in a circle around the piled-up cameras, lights, and tripods. “This is the old media,” a man said, through a megaphone. “This is what it looks like. Turn off Fox, turn off CNN.”

Outside the Capitol, rioters surrounded news crews, chasing off the reporters and smashing their equipment with bats.

Photograph by Balazs Gardi for The New Yorker

Another man, in a black leather jacket and wraparound sunglasses, suggested that journalists should be killed: “Start makin’ a list! Put all those names down, and we start huntin’ them down, one by one!”

“Traitors to the guillotine!”

“They won’t be able to walk down the streets!”

The radicalization of the Republican Party has altered the world of conservative media, which is, in turn, accelerating that radicalization. On November 7th, Fox News, which has often seemed to function as a civilian branch of the Trump Administration, called the race for Biden, along with every other major network. Furious, Trump encouraged his supporters to instead watch Newsmax, whose ratings skyrocketed as a result. Newsmax hosts have dismissed COVID-19 as a “scamdemic” and have speculated that Republican politicians were being infected with the virus as a form of “sabotage.” The Newsmax headliner Michelle Malkin has praised Fuentes as one of the “New Right leaders” and the groypers as “patriotic.” At the December 12th rally, I ran into the Pennsylvania Three Percent member whom I’d met in Harrisburg on November 7th. Then he had been a Fox News devotee, but since Election Day he’d discovered Newsmax. “I’d had no idea what it even was,” he told me. “Now the only thing that anyone I know watches anymore is Newsmax. They ask the hard questions.”

It seems unlikely that what happened on January 6th will turn anyone who inhabits such an ecosystem against Trump. On the contrary, there are already indications that the mayhem at the Capitol will further isolate and galvanize many right-wingers. The morning after the siege, an alternative narrative, pushed by Jones and other conspiracists, went viral on Parler: the assault on the Capitol had actually been instigated by Antifa agitators impersonating Trump supporters. Mo Brooks, an Alabama congressman who led the House effort to contest the certification of the Electoral College votes, tweeted, “Evidence growing that fascist ANTIFA orchestrated Capitol attack with clever mob control tactics.” (Brooks had warmed up the crowd for Trump on January 6th, with a speech whose bellicosity far surpassed the President’s. “Today is the day American patriots start takin’ down names and kickin’ ass!” he’d hollered.) Most of the “evidence” of Antifa involvement seems to be photographs of rioters clad in black. Never mind that, in early January, Tarrio, the Proud Boys chairman, wrote on Parler, “We might dress in all BLACK for the occasion.” Or that his colleague Joe Biggs, addressing antifascist activists, added, “We are going to smell like you, move like you, and look like you.”

Not long after the Brooks tweet, I got a call from a woman I’d met at previous Stop the Steal rallies. She had been unable to come to D.C., owing to a recent surgery. She asked if I could tell her what I’d seen, and if the stories about Antifa were accurate. She was upset—she did not believe that “Trump people” could have done what the media were alleging. Before I responded, she put me on speakerphone. I could hear other people in the room. We spoke for a while, and it was plain that they desperately wanted to know the truth. I did my best to convey it to them as I understood it.
Less than an hour after we got off the phone, the woman texted me a screenshot of a CNN broadcast with a news bulletin that read, “ antifa has taken responsiblitly for storming capital hill.
” The image, which had been circulating on social media, was crudely Photoshopped (and poorly spelled). “Thought you might want to see this,” she wrote.
In the year 2088, a five-hundred-pound time capsule is scheduled to be exhumed from beneath the stone slabs of Freedom Plaza. Inside an aluminum cylinder, historians will find relics honoring the legacy of Martin Luther King, Jr.
: a Bible, clerical robes, a cassette tape with King’s “I Have a Dream” speech, part of which he wrote in a nearby hotel. What will those historians know about the lasting consequences of the 2020 Presidential election, which culminated with the incumbent candidate inciting his supporters to storm the Capitol and threaten to lynch his adversaries? Will this year’s campaign against the democratic process have evolved into a durable insurgency? Something worse? On January 8th, Trump was permanently banned from Twitter.
Five days later, he became the only U.S. President in history to be impeached twice. (During the Capitol siege, the man in the hard hat withdrew from one of the Senate desks a manual, from a year ago, titled “ PROCEEDINGS OF THE UNITED STATES SENATE IN THE IMPEACHMENT TRIAL OF PRESIDENT DONALD JOHN TRUMP.
”) Although the President has finally agreed to submit to a peaceful transition of power, he has admitted no responsibility for the deadly riot. “People thought that what I said was totally appropriate,” he told reporters on January 12th.
He will not disappear. Neither will the baleful forces that he has conjured and awakened. This is why iconoclasts like Fuentes and Jones have often seemed more exultant than angry since Election Day. For them, the disappointment of Trump’s defeat has been eclipsed by the prospect of upheaval that it has brought about. As Fuentes said on the “InfoWars” panel, “This is the best thing that can happen, because it’s destroying the legitimacy of the system.” Fuentes was at the Capitol riot, though he denies going inside. On his show the next day, he called the siege “the most awe-inspiring and inspirational and incredible thing I have seen in my entire life.” At the heap of wrecked camera gear outside the Capitol, the man in the leather jacket and sunglasses declared to the crowd, “We are at war. . . . Mobilize in your own cities, your own counties. Storm your own capitol buildings. And take down every one of these corrupt motherfuckers.” Behind him, lights glowed in the rotunda. The sky darkened. At 8 p.m., Congress reconvened and resumed certifying the election. For six hours, Americans had held democracy hostage in the name of patriotism.
The storm might be here. ♦
" |
136 | 2,021 | "Getting vaccinated is hard. It’s even harder without the internet. | MIT Technology Review" | "https://www.technologyreview.com/2021/02/03/1017245/broadband-digital-divide-senior-citizens-pandemic" | "Getting vaccinated is hard. It’s even harder without the internet.
The digital divide is hurting many Americans just when they need connectivity the most. But change may require focusing on affordability, not access.
By Eileen Guo. Before his 190-square-foot apartment in San Francisco’s Tenderloin district was connected to the internet, Marvis Phillips depended on a friend with a laptop for his prolific letter-writing campaigns.
Phillips, a community organizer, wrote each note by hand and mailed them, then his friend typed and sent the missives, via email and online comment forms, to the city supervisors, planning commissions, statehouse officials, and Congressional representatives to whom he had been making his opinions known for over 40 years.
Phillips has lived for decades in the Alexander Residence, a 179-unit affordable housing building where internet access is, theoretically, available: he is just a few blocks from the headquarters of companies like Twitter, Uber, and Zendesk. But living on a fixed income that comes primarily from social security benefits, Phillips could not afford the costs of a broadband subscription or the device that he’d need to get connected.
“I had wanted to be online for years,” says the 65-year-old, but “I have to pay for my rent, buy my food—there were other things that were important.” For as long as the internet has existed, there has been a divide between those who have it and those who do not, with increasingly high stakes for people stuck on the wrong side of America’s “persistent digital divide.
” That’s one reason why, from the earliest days of his presidential campaign, Joe Biden promised to make universal broadband a priority.
But Biden’s promise has taken on extra urgency as a result of the pandemic. Covid-19 has widened many inequities, including the “ homework gap ” that threatened to leave lower-income students behind as schools moved online, as well as access to health care, unemployment benefits, court appearances , and—increasingly— the covid-19 vaccine , all of which require (or are facilitated by) internet connections.
Whether Biden can succeed in bridging the gap, however, depends on how he defines the problem. Is it one that can be fixed with more infrastructure, or one that requires social programs to address affordability and adoption gaps?
The hidden divide
For years, the digital divide was seen as a largely rural problem, and billions of dollars have gone into expanding broadband infrastructure and funding telecom companies to reach into more remote, underserved areas. This persistent focus on the rural-urban divide has left folks like Marvis Phillips—who struggle with the affordability of internet services, not with proximity—out of the loop.
And at the start of the pandemic, the continued impact of the digital divide became starkly drawn as schools switched to online teaching.
Images of students forced to sit in restaurant parking lots to access free WiFi so they could take their classes on the internet drove home just how wide the digital divide in America remains.
The Federal Communications Commission did take some action, asking internet service providers to sign a voluntary pledge to keep services going and forgive late fees. The FCC has not released data on how many people benefited from the pledge, but it did receive hundreds of complaints that the program was not working as intended.
Five hundred pages of these complaints were released last year after a public records request from The Daily Dot.
Among them was a mother who explained that the pandemic was forcing her to make an impossible choice.
"This isn't just about the number of people who have lost internet because they can't afford it. We believe a far greater number of people can't afford internet, but are sacrificing other necessities." “I have four boys who are all in school and need the internet to do their online school work,” she wrote. Her line was disconnected despite a promise that it would not be turned off due to non-payment. “I paid my bill of $221.00 to turn my services on. It was the last money I had and now do not have money to buy groceries for the week.” Other messages spoke of the need to forgo food, diapers, and other necessities in order to keep families connected for schoolwork and jobs.
“This isn't just about the number of people who have lost internet because they can't afford it,” says Dana Floberg, policy manager of consumer advocacy organization Free Press. “We believe a far greater number of people ... can't afford internet but are sacrificing other necessities.” According to Ann Veigle, an FCC spokesperson, such complaints are passed on to providers, who are “required to respond to the FCC and consumer in writing within 30 days.” She did not respond to questions on whether the service providers have shared reports or outcomes with the FCC, how many low-income internet and phone subscribers have benefited from the pledge, or any other outcomes of the program.
The lack of data is part of a broader problem with the FCC’s approach, says Floberg, since former chairman Ajit Pai recategorized the internet from a utility, like electricity, back to a less-regulated “information service.” She sees restoring the FCC’s regulatory authority as “the linchpin” toward “equitable and universal access and affordability” of broadband internet, by increasing competition and, in turn, resulting in better service and lower prices.
Measuring the wrong things
It took Marvis Phillips three months of free internet, two months of one-on-one training, and two donated iPads—upgraded during the pandemic to accommodate Zoom and telehealth calls—to get online. And since the city ordered people to stay at home to prevent the spread of the virus, Phillips says the internet has become his “lifeline.” “Loneliness and social isolation is...a social justice and poverty issue,” says Cathy Michalec, the executive director of Little Brothers-Friends of the Elderly, the nonprofit that helped Phillips connect as part of its mission to serve low-income seniors. As with other solutions to isolation—bus fare to visit a park, tickets to a museum—internet connections also require financial resources that many older adults don’t have.
There are many people like Phillips in San Francisco: according to data from the mayor’s office, 100,000 residents, including many adults over 60, still do not have home internet. Meanwhile, data from the Pew Research Center shows that, in 2019, only 59% of seniors across the country had home broadband—a figure that decreases among those with lower incomes and educational attainments, and those whose primary language is not English. The US Census Bureau, for its part, shows that 1 in 3 households headed by someone 65 or older does not have a computer.
Prices for broadband plans in the United States average $68 per month, according to a 2020 report by the New America Foundation, compared to the $10-$15 that some studies have suggested would actually be affordable for low-income households and the $9.95/month that Phillips currently pays through a subsidized program.
It’s all evidence of how broadband policy has been chasing the wrong metric, says Gigi Sohn, a distinguished fellow at the Georgetown Law Institute for Technology Law & Policy and former counselor to Democratic FCC chairman Tom Wheeler. Rather than focusing on whether people are served by broadband infrastructure, she argues that the FCC should be measuring internet access with a simpler question: “Do people have it in their homes?” When this is taken into account, the rural-urban digital divide begins to look a little different.
According to research by John Horrigan , a senior fellow at the Technology Policy Institute, there were 20.4 million American households that did not have broadband in 2019, but the vast majority were urban: 5.1 million were in rural locations, and 15.3 million were in metro areas.
This is not to say that the internet needs of rural residents are not important, Sohn adds, but underscores the argument that focusing on infrastructure alone only solves part of the problem. Regardless of why people don’t have access, she says, “we're not where we need to be.” Broadband policies that address the adoption and affordability gaps are on the horizon. In December, Congress passed a long-awaited second coronavirus stimulus package that included $7 billion toward an emergency expansion of broadband, with almost half—roughly $3.2 billion—set aside for $50/month internet subsidies for low income households.
This is far more than the $9.25 monthly subsidy provided by the FCC’s long-running Lifeline program.
Sohn says this increase is significant—and may stick around. “Once people have it [the $50 subsidy], it becomes more difficult to take it away,” she says, “so putting that stake in the ground is critically important.” Meanwhile, changes in the senate and the White House mean there is a chance for a bill which stalled last year to get a second look. The Accessible, Affordable Internet for All Act, championed by James Clyburn, a close ally of President Biden, proposed funding for broadband buildout to underserved areas, $50 in internet subsidies, and funding to community organizations and schools to encourage adoption. It was held up in the senate, but is likely to get revisited under Democratic leadership.
“Where does the information trickle down to?”
This slow progress is happening just as the need for home internet has become more acute than ever, with signups for covid-19 vaccinations hosted on websites that are difficult to navigate or downright dysfunctional, and newly available appointment slots announced on social media.
Even for those who have broadband, the process has been so confusing that, in many families, more digitally savvy grandchildren are registering on behalf of their grandparents.
“I have dealt with 10 phone calls in the last two weeks from older adults,” says Michalec. She’s receiving questions like: When are we going to get the vaccine? I've heard that you have to sign up on a website, but I don't have a cell phone or computer. What am I supposed to do? As she scrambles to find answers, Michalec is frustrated by the lack of clear communication on what existing solutions are already out there. Neither she nor any of her seniors were aware of the FCC’s subsidy programs, she says, even though they would meet the eligibility criteria.
Nor was she aware of the benefits that the most recent coronavirus stimulus package would provide, despite following the news closely. “Where does that information trickle down to?” she wonders. “How do we get an application into people’s hands?” Michalec says that she’s been looking for support from some of the large technology companies now in the neighborhood, as well as the greater Bay area. She says that she has personally written to Tim Cook at Apple, as well as Google representatives, but so far, she has had no luck.
“I’m sure they get letters like that all the time,” she says, but adds, “We don’t need the newest devices. I know…[they] have devices lying around.” Marvis Phillips, meanwhile, continues his community advocacy from his iPad. These days, his emails have homed in on the contradictions of covid-19 health orders.
“I just sent an email about having to go out to get your test, get your vaccine,” he says. “How can you ‘stay at home’ if you have to go out to do everything?” He tries to keep on top of the constant shifts in news and rules on vaccine availability, and then passes that information on to others in the community who are not as digitally connected.
He wishes that health workers could simply go door-to-door in administering vaccines, so that medically vulnerable populations—like almost everyone in his building—could truly stay protected at home.
He continues to email everyone he can think of to enact such a policy, but he is relieved, at least, that he can use the internet to access his health provider’s web portal. Eventually, he says, it will give him the alert to schedule an appointment. “As of Thursday ... still doing 75+ but that could change this coming week,” he shared over the weekend. “I check every other day or so.” He’s still waiting for the taxi voucher that he’ll be provided to go to and from the vaccine site, so when the notification pops up, Phillips hopes that he’ll be ready.
" |
137 | 2,017 | "This Is the Reason Ethereum Exists | MIT Technology Review" | "https://www.technologyreview.com/s/609227/this-is-the-reason-ethereum-exists" | "This Is the Reason Ethereum Exists By Mike Orcutt. In the beginning, there was Bitcoin. The cryptocurrency has for many become synonymous with the idea of digital money, rising to a market capitalization of nearly $100 billion. But the second-most-valuable currency, Ether, may be far more interesting than its headline-grabbing older sibling. To understand why it’s so popular, it helps to understand why the software that runs it, called Ethereum, exists in the first place.
This piece first appeared in our new twice-weekly newsletter, Chain Letter, which covers the world of blockchain and cryptocurrencies.
On Halloween in 2008, someone or some group of people using the name Satoshi Nakamoto published a white paper describing a system that would rely on a “decentralized” network of computers to facilitate the peer-to-peer exchange of value (bitcoins). Those computers would verify and record every transaction in a shared, encrypted accounting ledger. Nakamoto called this ledger a “blockchain,” because it’s composed of groups of transactions called “blocks,” each one cryptographically linked to the one preceding it.
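That hash-linking is the whole trick: because every block carries a cryptographic hash of the block before it, rewriting history anywhere breaks every link that follows. A minimal Python sketch of the idea (a toy illustration with invented transaction strings, not Bitcoin's real block format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A toy chain: every block records the hash of its predecessor, so
# altering any past transaction invalidates all the links after it.
chain = [{"prev": "0" * 64, "txs": ["alice pays bob 5"]}]
for txs in (["bob pays carol 2"], ["carol pays dave 1"]):
    chain.append({"prev": block_hash(chain[-1]), "txs": txs})

# Tamper with the first block: its successor's stored hash no longer matches.
chain[0]["txs"][0] = "alice pays bob 5000"
print(block_hash(chain[0]) == chain[1]["prev"])  # False
```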
Bitcoin eventually took off, and soon people latched onto the idea that its blockchain could be used to do other things, from tracking medical data to executing complex financial transactions (see “ Why Bitcoin Could Be Much More Than a Currency ”). But its design, intended specifically for a currency, limited the range of applications it could support, and Bitcoin aficionados started brainstorming new approaches.
It was from this primordial soup that Ethereum emerged.
In a 2013 white paper , Vitalik Buterin, then just 19, laid out his plan for a blockchain system that could also facilitate all sorts of “decentralized applications.” Buterin achieved this in large part by baking a programming language into Ethereum so that people could customize it to their purposes. These new apps are based on so-called smart contracts—computer programs that execute transactions, usually involving the transfer of currency, according to stipulations agreed upon by the participants.
Imagine, for example, that you want to send your friend some cryptocurrency automatically, at a specific time. There’s a smart contract for that. More complex smart contracts even allow for the creation of entirely new cryptocurrencies. That feature is at the heart of most initial coin offerings. ( What the Hell Is an ICO? ← here’s a primer) The processing power needed to run the smart contracts comes from the computers in an open, distributed network. Those computers also verify and record transactions in the blockchain. Ether tokens, which are currently worth about $300 apiece, are the reward for these contributions. Whereas Bitcoin is the first shared global accounting ledger, Ethereum is supposed to be the first shared global computer. The technology is nascent, and there are plenty of kinks to iron out, but that’s what all the fuss is about.
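Real Ethereum contracts are written in purpose-built languages and executed by the network itself, but the scheduled-payment idea above is simple enough to sketch. Here is a toy Python simulation of a time-locked transfer; the class and method names are invented for this illustration, and an actual contract would read block timestamps rather than a local clock:

```python
import time

class TimedTransfer:
    """Toy stand-in for a smart contract: funds escrowed at creation are
    released to the recipient only after an agreed time has passed."""

    def __init__(self, sender, recipient, amount, release_at):
        self.sender = sender
        self.recipient = recipient
        self.amount = amount          # funds locked in the contract
        self.release_at = release_at  # earliest moment a payout is allowed
        self.paid = False

    def claim(self, now=None):
        """Pay out at most once, and only after the agreed release time."""
        now = time.time() if now is None else now
        if self.paid or now < self.release_at:
            return False
        self.paid = True
        return True

contract = TimedTransfer("alice", "bob", 10, release_at=time.time() + 3600)
print(contract.claim())                        # False: an hour too early
print(contract.claim(now=time.time() + 7200))  # True: past the release time
```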
" |
138 | 2,011 | "What Bitcoin Is, and Why It Matters | MIT Technology Review" | "https://www.technologyreview.com/s/424091/what-bitcoin-is-and-why-it-matters" | "What Bitcoin Is, and Why It Matters Can a booming “crypto-currency” really compete with conventional cash? By Tom Simonite. Unlike other currencies, Bitcoin is underwritten not by a government, but by a clever cryptographic scheme.
For now, little can be bought with bitcoins, and the new currency is still a long way from competing with the dollar. But this explainer lays out what Bitcoin is, why it matters, and what needs to happen for it to succeed.
Where does Bitcoin come from? In 2008, a programmer known as Satoshi Nakamoto—a name believed to be an alias— posted a paper outlining Bitcoin’s design to a cryptography e-mail list. Then, in early 2009, he, she, or they released software that can be used to exchange bitcoins using the scheme. That software is now maintained by a volunteer open-source community coordinated by four core developers.
“Satoshi’s a bit of a mysterious figure,” says Jeff Garzik, a member of that core team and founder of Bitcoin Watch, which tracks the Bitcoin economy. “I and the other core developers have occasionally corresponded with him by e-mail, but it’s always a crapshoot as to whether he responds,” says Garzik. “That and the forum are the entirety of anyone’s experience with him.”
How does Bitcoin work?
Nakamoto wanted people to be able to exchange money electronically and securely without the need for a third party, such as a bank or a company like PayPal. He based Bitcoin on cryptographic techniques that allow you to be sure the money you receive is genuine, even if you don’t trust the sender.
The basics
Once you download and run the Bitcoin client software, it connects over the Internet to the decentralized network of all Bitcoin users and also generates a pair of unique, mathematically linked keys, which you’ll need to exchange bitcoins with any other client. One key is private and kept hidden on your computer. The other is public, and a version of it dubbed a Bitcoin address is given to other people so they can send you bitcoins. Crucially, it is practically impossible—even with the most powerful supercomputer—to work out a private key from someone’s public key. This prevents anyone from impersonating you. Your public and private keys are stored in a file that can be transferred to another computer—for example, if you upgrade.
A Bitcoin address looks something like this: 15VjRaDX9zpbA8LVnbrCAFzrVzN7ixHNsC. Stores that accept bitcoins—for example, this one, selling alpaca socks —provide you with their address so you can pay for goods.
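An address like that is derived from the public key by hashing and encoding it. The sketch below follows the standard derivation, assuming the third-party ecdsa package is installed and that your Python's hashlib exposes RIPEMD-160 (not every build does):

```python
import hashlib
import ecdsa  # third-party package: pip install ecdsa

# 1. Generate a secp256k1 key pair; 0x04 marks an uncompressed public key.
private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
public_key = b"\x04" + private_key.get_verifying_key().to_string()

# 2. Hash the public key: SHA-256 first, then RIPEMD-160.
key_hash = hashlib.new("ripemd160", hashlib.sha256(public_key).digest()).digest()

# 3. Add the mainnet version byte and a double-SHA-256 checksum.
versioned = b"\x00" + key_hash
checksum = hashlib.sha256(hashlib.sha256(versioned).digest()).digest()[:4]
payload = versioned + checksum

# 4. Base58-encode the payload; mainnet addresses come out starting with "1".
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
n = int.from_bytes(payload, "big")
encoded = ""
while n:
    n, rem = divmod(n, 58)
    encoded = ALPHABET[rem] + encoded
leading_zeros = len(payload) - len(payload.lstrip(b"\x00"))
print("1" * leading_zeros + encoded)
```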
Transferring bitcoins
When you perform a transaction, your Bitcoin software performs a mathematical operation to combine the other party’s public key and your own private key with the amount of bitcoins that you want to transfer. The result of that operation is then sent out across the distributed Bitcoin network so the transaction can be verified by Bitcoin software clients not involved in the transfer.
Those clients make two checks on a transaction. One uses the public key to confirm that the true owner of the pair sent the money, by exploiting the mathematical relationship between a person’s public and private keys; the second refers to a public transaction log stored on the computer of every Bitcoin user to confirm that the person has the bitcoins to spend.
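The first of those checks is an ordinary digital-signature verification. A simplified sketch using the same third-party ecdsa package; real clients sign a structured transaction rather than a plain string, so treat this as an illustration of the principle only:

```python
import ecdsa  # third-party package: pip install ecdsa

# The sender's key pair on Bitcoin's secp256k1 curve.
signing_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)
verifying_key = signing_key.get_verifying_key()

# Signing: only the holder of the private key can produce this signature.
transfer = b"send 1.5 BTC to 15VjRaDX9zpbA8LVnbrCAFzrVzN7ixHNsC"
signature = signing_key.sign(transfer)

# Verification: any client can check it with the public key alone.
print(verifying_key.verify(signature, transfer))  # True

# An altered transfer fails the check, so impersonation is detected.
try:
    verifying_key.verify(signature, b"send 1000 BTC to a thief")
except ecdsa.BadSignatureError:
    print("altered transaction rejected")
```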
When a client verifies a transaction, it forwards the details to others in the network to check for themselves. In this way a transaction quickly reaches and is verified by every Bitcoin client that is online. Some of those clients—“miners”—also try to add the new transfer to the public transaction log, by racing to solve a cryptographic puzzle. Once one of them wins, the updated log is passed throughout the Bitcoin network. When your software receives the updated log, it knows your payment was successful.
Security
The nature of the mathematics ensures that it is computationally easy to verify a transaction but practically impossible to generate fake transactions and spend bitcoins you don’t own. The existence of a public log of all transactions also provides a deterrent to money laundering, says Garzik. “You’re looking at a global public transaction register,” he says. “You can trace the history of every single Bitcoin through that log, from its creation through every transaction.”
How can you obtain bitcoins?
Exchanges like Mt. Gox provide a place for people to trade bitcoins for other types of currency. Some enthusiasts have also started doing work, such as designing websites, in exchange for bitcoins. This jobs board advertises contract work paying in bitcoins.
But bitcoins also need to be generated in the first place. Bitcoins are “mined” when you set your Bitcoin client to a mode that has it compete to update the public log of transactions. All the clients set to this mode race to solve a cryptographic puzzle by completing the next “block” of the shared transaction log. Winning the race to complete the next block wins you a 50-bitcoin prize. This feature exists as a way to distribute bitcoins in the currency’s early years. Eventually, new coins will not be issued this way; instead, mining will be rewarded with a small fee taken from some of the value of a verified transaction.
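The “cryptographic puzzle” is a brute-force search: keep hashing the block with different nonces until the result meets a target. A toy Python version, with a difficulty (four leading hex zeros) that is trivially small next to the real network's:

```python
import hashlib

def mine(block_data, difficulty=4):
    """Brute-force a nonce whose SHA-256 hash starts with `difficulty` hex zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1  # no shortcut exists: just keep guessing

nonce, digest = mine("block 451: alice pays bob 5")
print(nonce, digest)
# Checking a claimed answer takes one hash; finding it took thousands of guesses.
```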
Mining is very computationally intensive, to the point that any computer without a powerful graphics card is unlikely to mine any bitcoins in less than a few years.
Where to spend your bitcoins
There aren’t a lot of places right now. Some Bitcoin enthusiasts with their own businesses have made it possible to swap bitcoins for tea, books, or Web design (see a comprehensive list here). But no major retailers accept the new currency yet.
If the Federal Reserve controls the dollar, who controls the Bitcoin economy? No one. The economics of the currency are fixed into the underlying protocol developed by Nakamoto.
Nakamoto’s rules specify that the number of bitcoins in circulation will grow at an ever-decreasing rate toward a maximum of 21 million. Currently there are just over six million; in 2030, there will be over 20 million bitcoins.
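The 21 million ceiling follows from the issuance schedule. In the deployed protocol (a detail not spelled out in this article), the 50-bitcoin block reward is cut in half every 210,000 blocks, so total issuance is a geometric series that a few lines of Python can sum:

```python
# Bitcoin's issuance: a 50-coin block reward, halved every 210,000 blocks.
reward, total = 50.0, 0.0
while reward >= 1e-8:   # rewards below one satoshi effectively vanish
    total += 210_000 * reward
    reward /= 2
print(f"{total:,.0f}")  # ~21,000,000: the supply can never exceed this
```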
Nakamoto’s scheme includes one loophole, however: if more than half of the Bitcoin network’s computing power comes under the control of one entity, then the rules can change. This would prevent, for example, a criminal cartel from faking a transaction log in its own favor to dupe the rest of the community.
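Nakamoto's white paper quantifies that risk with a gambler's-ruin argument: the probability that an attacker controlling a fraction q of the network's computing power ever catches up from z blocks behind. A direct transcription in Python:

```python
def catch_up_probability(q, z):
    """Chance an attacker with share q of the hash power ever erases a
    z-block deficit against the honest chain (Nakamoto 2008, section 11)."""
    p = 1.0 - q                   # the honest network's share
    return 1.0 if q >= p else (q / p) ** z

for q in (0.10, 0.30, 0.45):
    print(f"q={q:.2f}: {catch_up_probability(q, z=6):.6f}")
# A minority attacker's odds shrink geometrically with each confirmation;
# at q >= 0.5 success is guaranteed, which is why majority control matters.
```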
It is unlikely that anyone will ever obtain this kind of control. “The combined power of the network is currently equal to one of the most powerful supercomputers in the world,” says Garzik. “Satoshi’s rules are probably set in stone.”
Isn’t a fixed supply of money dangerous?
It’s certainly different. “Elaborate controls to make sure that currency is not produced in greater numbers is not something any other currency, like the dollar or the euro, has,” says Russ Roberts, professor of economics at George Mason University. The consequence will likely be slow and steady deflation, as the growth in circulating bitcoins declines and their value rises.
“That is considered very destructive in today’s economies, mostly because when it occurs, it is unexpected,” says Roberts. But he thinks that won’t apply in an economy where deflation is expected. “In a Bitcoin world, everyone would anticipate that, and they know what they got paid would buy more then than it would now.”
Does Bitcoin threaten the dollar or other currencies?
That’s unlikely. “It might have a niche as a way to pay for certain technical services,” says Roberts, adding that even limited success could allow Bitcoin to change the fate of more established currencies. “Competition is good, even between currencies—perhaps the example of Bitcoin could influence the behavior of the Federal Reserve.” Central banks the world over have freely increased the money supply of their currencies in response to the global downturn. Roberts suggests that Bitcoin could set a successful, if smaller-scale, example of how economies that forbid such intervention can also succeed.
" |
139 | 2,021 | "Chinese hackers are attacking Uyghurs by posing as UN Human Rights Council | MIT Technology Review" | "https://www.technologyreview.com/2021/05/27/1025443/chinese-hackers-uyghur-united-nations" | "Chinese hackers posing as the UN Human Rights Council are attacking Uyghurs Chinese-speaking hackers are targeting Uyghur Muslims with fake United Nations reports and phony support organizations, according to a new report.
By Patrick Howell O'Neill. Chinese-speaking hackers are masquerading as the United Nations in ongoing cyber-attacks against Uyghurs, according to the cybersecurity firms Check Point and Kaspersky.
Researchers identified an attack in which hackers posing as the UN Human Rights Council send a document detailing human rights violations to Uyghur individuals. It is in fact a malicious Microsoft Word file that, once downloaded, fetches malware: the likely goal, say the two companies, is to trick high-profile Uyghurs inside China and Pakistan into opening a back door to their computers.
“We believe that these cyber-attacks are motivated by espionage, with the endgame of the operation being the installation of a back door into the computers of high-profile targets in the Uyghur community,” said Lotem Finkelstein, head of threat intelligence at Check Point, in a statement. “The attacks are designed to fingerprint infected devices, including all of [their] running programs. From what we can tell, these attacks are ongoing, and new infrastructure is being created for what look like future attacks.” Hacking is a frequently used weapon in Beijing’s arsenal, and particularly in its ongoing genocide against Ugyhurs, which uses cutting-edge surveillance both in the real world and online. Recent reporting by MIT Technology Review shed new light on another sophisticated hacking campaign that targeted members of the Muslim minority.
Related Story An attack that targeted Apple devices was used to spy on China’s Muslim minority—and US officials claim it was developed at the country’s top hacking competition.
In addition to pretending to be from the United Nations, the hackers also built a fake and malicious website for a human rights organization called the “Turkic Culture and Heritage Foundation,” according to the report. The group’s fake website offers grants—but in fact, anybody who attempts to apply for a grant is prompted to download a false “security scanner” that is in fact a back door into the target’s computer, the researchers explained.
“The attackers behind these cyber-attacks send malicious documents under the guise of the United Nations and fake human rights foundations to their targets, tricking them into installing a backdoor to the Microsoft Windows software running on their computers,” the researchers wrote. This allows the attackers to collect basic information they seek from the victim’s computer, as well as running more malware on the machine with the potential to do more damage. The researchers say they haven’t yet seen all the capabilities of this malware.
The code found in these attacks couldn’t be matched to an exact known hacking group, said the researchers, but it was found to be identical to code found on multiple Chinese-language hacking forums and may have been copied directly from there.
hide by Patrick Howell O'Neill Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Computing What’s next for the world’s fastest supercomputers Scientists have begun running experiments on Frontier, the world’s first official exascale machine, while facilities worldwide build other machines to join the ranks.
By Sophia Chen archive page AI-powered 6G networks will reshape digital interactions The convergence of AI and communication technologies will create 6G networks that make hyperconnectivity and immersive experiences an everyday reality for consumers.
By MIT Technology Review Insights archive page The power of green computing Sustainable computing practices have the power to both infuse operational efficiencies and greatly reduce energy consumption, says Jen Huffstetler, chief product sustainability officer at Intel.
By MIT Technology Review Insights archive page How this Turing Award–winning researcher became a legendary academic advisor Theoretical computer scientist Manuel Blum has guided generations of graduate students into fruitful careers in the field.
By Sheon Han archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
Enter your email Thank you for submitting your email! It looks like something went wrong.
We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
" |
140 | 2,021 | "How China turned a prize-winning iPhone hack against the Uyghurs | MIT Technology Review" | "https://www.technologyreview.com/2021/05/06/1024621/china-apple-spy-uyghur-hacker-tianfu" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts How China turned a prize-winning iPhone hack against the Uyghurs An attack that targeted Apple devices was used to spy on China’s Muslim minority—and US officials claim it was developed at the country’s top hacking competition.
By Patrick Howell O'Neill archive page Ms Tech | Getty Beijing secretly used an award-winning iPhone hack to spy on Uyghurs The United States tracked the attack and informed Apple Tianfu Cup is a “venue for China to get zero-days,” say experts In March 2017, a group of hackers from China arrived in Vancouver with one goal: Find hidden weak spots inside the world’s most popular technologies.
Google’s Chrome browser, Microsoft’s Windows operating system, and Apple’s iPhones were all in the crosshairs. But no one was breaking the law. These were just some of the people taking part in Pwn2Own, one of the world’s most prestigious hacking competitions.
It was the 10th anniversary for Pwn2Own, a contest that draws elite hackers from around the globe with the lure of big cash prizes if they manage to exploit previously undiscovered software vulnerabilities, known as “zero-days.” Once a flaw is found, the details are handed over to the companies involved, giving them time to fix it. The hacker, meanwhile, walks away with a financial reward and eternal bragging rights.
For years, Chinese hackers were the most dominant forces at events like Pwn2Own, earning millions of dollars in prizes and establishing themselves among the elite. But in 2017, that all stopped.
One of China’s elite hacked an iPhone…. Virtually overnight, Chinese intelligence used it as a weapon against a besieged minority ethnic group, striking before Apple could fix the problem. It was a brazen act performed in broad daylight.
In an unexpected statement, the billionaire founder and CEO of the Chinese cybersecurity giant Qihoo 360—one of the most important technology firms in China—publicly criticized Chinese citizens who went overseas to take part in hacking competitions. In an interview with the Chinese news site Sina, Zhou Hongyi said that performing well in such events represented merely an “imaginary” success. Zhou warned that once Chinese hackers show off vulnerabilities at overseas competitions, they can “no longer be used.” Instead, he argued, the hackers and their knowledge should “stay in China” so that they could recognize the true importance and “strategic value” of the software vulnerabilities.
Beijing agreed. Soon, the Chinese government banned cybersecurity researchers from attending overseas hacking competitions. Just months later, a new competition popped up inside China to take the place of the international contests. The Tianfu Cup, as it was called, offered prizes that added up to over a million dollars.
The inaugural event was held in November 2018. The $200,000 top prize went to Qihoo 360 researcher Qixun Zhao, who showed off a remarkable chain of exploits that allowed him to easily and reliably take control of even the newest and most up-to-date iPhones. From a starting point within the Safari web browser, he found a weakness in the core of the iPhones operating system, its kernel. The result? A remote attacker could take over any iPhone that visited a web page containing Qixun’s malicious code. It’s the kind of hack that can potentially be sold for millions of dollars on the open market to give criminals or governments the ability to spy on large numbers of people. Qixun named it “Chaos.” Two months later, in January 2019, Apple issued an update that fixed the flaw. There was little fanfare—just a quick note of thanks to those who discovered it.
But in August of that year, Google published an extraordinary analysis into a hacking campaign it said was “exploiting iPhones en masse.” Researchers dissected five distinct exploit chains they’d spotted “in the wild.” These included the exploit that won Qixun the top prize at Tianfu, which they said had also been discovered by an unnamed “attacker.” The Google researchers pointed out similarities between the attacks they caught being used in the real world and Chaos. What their deep dive omitted, however, were the identities of the victims and the attackers: Uyghur Muslims and the Chinese government.
A campaign of oppression For the past seven years, China has committed human rights abuses against the Uyghur people and other minority groups in the Western province of Xinjiang. Well-documented aspects of the campaign include detention camps, systematic compulsory sterilization, organized torture and rape , forced labor, and an unparalleled surveillance effort. Officials in Beijing argue that China is acting to fight “terrorism and extremism,” but the United States, among other countries, has called the actions genocide.
The abuses add up to an unprecedented high-tech campaign of oppression that dominates Uyghur lives, relying in part on targeted hacking campaigns.
China’s hacking of Uyghurs is so aggressive that it is effectively global , extending far beyond the country’s own borders. It targets journalists, dissidents, and anyone who raises Beijing’s suspicions of insufficient loyalty.
Shortly after Google’s researchers noted the attacks, media reports connected the dots: the targets of the campaign that used the Chaos exploit were the Uyghur people, and the hackers were linked to the Chinese government. Apple published a rare blog post that confirmed the attack had taken place over two months: that is, the period beginning immediately after Qixun won the Tianfu Cup and stretching until Apple issued the fix.
Related Story The tech giant gave a rare statement that bristled at Google’s analysis of the novel hacking operation.
MIT Technology Review has learned that United States government surveillance independently spotted the Chaos exploit being used against Uyghurs, and informed Apple. (Both Apple and Google declined to comment on this story.) The Americans concluded that the Chinese essentially followed the “strategic value” plan laid out by Qihoo’s Zhou Hongyi; that the Tianfu Cup had generated an important hack; and that the exploit had been quickly handed over to Chinese intelligence, which then used it to spy on Uyghurs.
The US collected the full details of the exploit used to hack the Uyghurs, and it matched Tianfu’s Chaos hack, MIT Technology Review has learned. (Google’s in-depth examination later noted how structurally similar the exploits are.) The US quietly informed Apple, which had already been tracking the attack on its own and reached the same conclusion: the Tianfu hack and the Uyghur hack were one and the same. The company prioritized a difficult fix.
Qihoo 360 and Tianfu Cup did not respond to multiple requests for comment. When we contacted Qixun Zhao via Twitter, he strongly denied involvement, although he also said he couldn’t remember who came into possession of the exploit code. At first, he suggested the exploit wielded against Uyghurs was probably used “after the patch release.” On the contrary, both Google and Apple have extensively documented how this exploit was used before January 2019. He also pointed out that his ‘Chaos’ exploit shared code from other hackers. In fact, within Apple and US intelligence, the conclusion has long been that these exploits are not merely similar—they are the same. Although Qixun wrote the exploit, there is nothing to suggest he was personally involved in what happened to it after the Tianfu event (Chinese law requires citizens and organizations to provide support and assistance to the country’s intelligence agencies whenever asked.) By the time the vulnerabilities were closed, Tianfu had achieved its goal.
“The original decision to not to allow the hackers to go abroad to competitions seems to be motivated by a desire to keep discovered vulnerabilities inside of China,” says Adam Segal, an expert on Chinese cybersecurity policy at the Council for Foreign Relations. It also cut top Chinese hackers from other income sources “so they are forced into a closer connection with the state and established companies,” he says.
The incident is stark. One of China’s elite hacked an iPhone, and won public acclaim and a large amount of money for doing so. Virtually overnight, Chinese intelligence used it as a weapon against a besieged minority ethnic group, striking before Apple could fix the problem. It was a brazen act performed in broad daylight and with the knowledge that there would be no consequences to speak of.
Concerning links Today, the Tianfu Cup is heading into its third year, and it’s sponsored by some of China’s biggest tech companies: Alibaba, Baidu, and Qihoo 360 are among the organizers. But American officials and security experts are increasingly concerned about the links between those involved in the competition and the Chinese military.
Qihoo, which is valued at over $9 billion, was one of dozens of Chinese companies added to a trade blacklist by the United States in 2020 after a US Department of Commerce assessment that the company might support Chinese military activity.
Others involved in the event have also raised alarms in Washington. The Beijing company Topsec, which helps organize Tianfu, allegedly provides hacking training, services, and recruitment for the government and has employed nationalist hackers, according to US officials.
The company is linked to cyber-espionage campaigns including the 2015 hack of the US insurance giant Anthem, a connection that was accidentally exposed when hackers used the same server to try to break into a US military contractor and to host a Chinese university hacking competition.
Related Story The iPhone’s locked-down approach to security is spreading, but advanced hackers have found that higher barriers are great for avoiding capture.
Other organizers and sponsors include NSFocus, which grew directly out of the earliest Chinese nationalist hacker movement called the Green Army, and Venus Tech, a prolific Chinese military contractor that has been linked to offensive hacking.
One other Tianfu organizer, the state-owned Chinese Electronics Technology Group, has a surveillance subsidiary called Hikvision, which provides “Uyghur analytics” and facial recognition tools to the Chinese government. It was added to a US trade blacklist in 2019.
US experts say the links between the event and Chinese intelligence are clear, however.
“I think it is not only a venue for China to get zero-days but it’s also a big recruiting venue,” says Scott Henderson, an analyst on the cyber espionage team at FireEye, a major security company based in California.
Tianfu’s links to Uyghur surveillance and genocide show that getting early access to bugs can be a powerful weapon. In fact, the “ reckless ” hacking spree that Chinese groups launched against Microsoft Exchange in early 2021 bears some striking similarities.
In that case, a Taiwanese researcher uncovered the security flaws and passed them to Microsoft, which then privately shared them with security partners. But before a fix could be released, Chinese hacking groups started exploiting the flaw all around the world. Microsoft, which was forced to rush out a fix two weeks earlier than planned, is investigating the potential that the bug was leaked.
These bugs are incredibly valuable, not just in financial terms, but in their capacity to create an open window for espionage and oppression.
Google researcher Ian Beer said as much in the original report detailing the exploit chain. “I shan’t get into a discussion of whether these exploits cost $1 million, $2 million, or $20 million,” he wrote. “I will instead suggest that all of those price tags seem low for the capability to target and monitor the private activities of entire populations in real time.” hide by Patrick Howell O'Neill Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Computing What’s next for the world’s fastest supercomputers Scientists have begun running experiments on Frontier, the world’s first official exascale machine, while facilities worldwide build other machines to join the ranks.
By Sophia Chen archive page AI-powered 6G networks will reshape digital interactions The convergence of AI and communication technologies will create 6G networks that make hyperconnectivity and immersive experiences an everyday reality for consumers.
By MIT Technology Review Insights archive page The power of green computing Sustainable computing practices have the power to both infuse operational efficiencies and greatly reduce energy consumption, says Jen Huffstetler, chief product sustainability officer at Intel.
By MIT Technology Review Insights archive page How this Turing Award–winning researcher became a legendary academic advisor Theoretical computer scientist Manuel Blum has guided generations of graduate students into fruitful careers in the field.
By Sheon Han archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
Enter your email Thank you for submitting your email! It looks like something went wrong.
We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
" |
141 | 2,020 | "Covid-19 “long haulers” are organizing online to study themselves | MIT Technology Review" | "https://www.technologyreview.com/2020/08/12/1006602/covid-19-long-haulers-are-organizing-online-to-study-themselves" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Covid-19 “long haulers” are organizing online to study themselves By Tanya Basu archive page Courtesy Photos (Wei, McCorkell, Lowenstein, Davis, Akrami: Francis Ferland) Gina Assaf was running in Washington, DC, on March 19 when she suddenly couldn’t take another step. “I was so out of breath I had to stop,” she says. Five days earlier, she’d hung out with a friend; within days, that friend and their partner had started showing three classic signs of covid-19: fever, cough, and shortness of breath.
Assaf had those symptoms too, and then some. By the second week, which she describes as “the scariest and hardest on my body,” her chest was burning and she was dizzy. Her friend recovered, but Assaf was still “utterly exhausted.” A full month after falling ill, she attempted to go to grocery shopping and ended up in bed for days.
She didn't initially have access to a coronavirus test, and doctors who saw her virtually suggested she was experiencing anxiety, psychosomatic illness, or maybe allergies. “I felt very alone and confused, and doctors had no answers or help for me,” says Assaf, whose symptoms persist to this day.
In those first few months, Assaf found a legion of people in situations similar to her own in a Slack support group for covid-19 patients, including hundreds who self-identified as “long-haulers,” the term most commonly used to describe those who remain sick long after being infected.
There, she noticed, long-haulers were trying to figure themselves out: Did they have similar blood types? Get tested at a certain time? Have a common geographic or demographic denominator? So Assaf, a technology design consultant, launched a channel called #research-group. A team of 23 people, led by six scientists and survey designers, began aggregating questions in a Google form. In April, they shared it within the Slack group and on other social-media groups for long-haulers like them.
In May, this group, which now calls itself Patient-Led Research for Covid-19, released its first report.
Based on 640 responses, it provides perhaps the most in-depth look at long-haulers to date and offers a window into what life is like for certain coronavirus patients who are taking longer—much longer—to recover.
Until recently, the idea that a person could have the coronavirus for a long time was foreign. Doctors still don’t know what to do with these patients. At the beginning of the pandemic, those who got sick followed one of two paths: either they recovered or they died. Long-haulers don’t fit in either bucket.
The existence of a third path is only now being acknowledged. It wasn’t until late July that the US Centers for Disease Control published a paper recognizing that as many as one-third of coronavirus patients not sick enough to be admitted to the hospital don’t fully recover.
Zijian Chen, the medical director at Mount Sinai’s Center for Post-Covid Care in New York, says he and his colleagues noticed by late April that some patients weren’t recovering. “That is when we realized that patients will need further care,” he says.
What that care entails, however, remains fuzzy. Part of the problem is there isn’t a definition for what constitutes a long-hauler. Chen says Mount Sinai’s program includes “patients with a positive test result for covid-19 and [whose] symptoms persist for more than one month after the initial infection.” The Patient-Led Research team’s survey targeted patients who felt symptoms for longer than two weeks; importantly, some respondents who reported symptoms were not able to get tested, which would have disqualified them from Chen’s program. The CDC’s paper was based on interviews with subjects conducted 14 to 21 days after they received a positive test result.
Chen hopes to conduct clinical care and research to better understand long-haulers’ symptoms. But he says it’s difficult to devote time or personnel to the task in the midst of a pandemic.
Susannah Fox, who researches online movements within chronic-disease communities, says patient-led research groups such as the one Assaf started will increasingly command the attention of medical researchers, particularly during crises when doctors and scientists are overwhelmed.
“The future of health care and technology is being built in these patient communities,” she says, noting that many early adopters of online bulletin boards and virtual communities were people with rare or chronic diseases who wanted to meet other people like them.
Today, the Patient-Led Research team has new digital tools at its disposal that allow its members to connect and carry out their own research while isolated at home. One resource in particular—the Slack support group, which was created by a company called Body Politic—has been crucial to the team’s efforts.
Body Politic was an emerging media company based in New York City that aimed to highlight underrepresented voices. Then the pandemic hit. Within days of one another, three Body Politic employees got sick with what they all suspect was the disease. “Our priorities shifted,” says Fiona Lowenstein, founder and editor in chief, who tested positive.
The company’s first support group for covid patients wasn’t on Slack, says Sabrina Bleich, Body Politic’s creative director, who was among those to fall ill. The group initially gathered followers on Instagram, but when that became too overwhelming, they started a WhatsApp chat group. Within a couple of days, though, the group had exceeded the WhatsApp group limit of 256. She says Slack “felt like the right option to house a large group of people, be adaptable as we grew, and allow for many different communities and conversation streams to occur simultaneously.” That Slack group has ballooned to more than 7,000 active members. “There was a huge group of patients who felt alone,” Lowenstein says. “They had no idea that they were not alone.” There are subgroups based on geography (“The UK group is very active,” Lowenstein says) and symptoms (neurological symptoms are a popular topic). Members are from all over the world, though Lowenstein suspects the fact that it’s on Slack might bias its participation toward those who know how to use the software.
Despite its limits, the Slack group allowed the coronavirus long-haulers in the Patient-Led Research group to find one another. It made it possible for them to coordinate their efforts and launch a study of their own symptoms. For many, the group has both provided a way to draw medical attention to their condition and served as a form of community during months of quarantine.
The organizers—mostly millennial women—have bonded through working together on this project. Assaf leads the group.
Hannah Wei, a qualitative researcher based in Canada, handles qualitative analysis; Lisa McCorkell, a policy analyst in California, has taken on data analysis; and Athena Akrami, a neuroscientist in London, provides statistical analysis. They can all name the exact moment when symptoms set in and precisely what day and time they got worse or better.
Hannah Davis, who handled data analysis and visualization, remembers when she realized she was sick. It was March 25, and she was struggling to read a text message. “We were trying to arrange a video call with a friend, but I couldn’t understand what it was saying,” she says.
She soon developed a persistent low fever and began having difficulty breathing—symptoms typical of the coronavirus. She was told to stay home and was unable to get a test. But Davis calls those issues “mild” compared with those that came later. She had a hard time reading and started to notice phantom smells. She had gastrointestinal issues, and after 103 days she developed a skin rash characteristic of covid-19.
Davis felt isolated. At the time, she was stuck in her Brooklyn apartment—alone, sick, and wishing she could connect with someone who understood what she was going through. The Body Politic Slack group was “a lifesaver,” she says. “I don’t know I could have [kept going] without it.” When I spoke to her 135 days after she initially fell ill, Davis was still sick, with daily fevers, joint pain, cognitive issues, and more. But she feels a renewed sense of purpose thanks to the Patient-Led Research team.
Many in the group were doing their own research even before they joined forces. Wei, a long-hauler who was diagnosed by X-ray and tested negative 40 days later, was frustrated at the lack of information and resources available for people like her. So she created covidhomecare.ca, which includes Google Doc templates for tracking symptoms (and a log of her own symptoms as a guide).
Wei’s expertise in survey design helped the Patient-Led Research group figure out the best way to go about studying themselves. She notes that the group’s survey results are biased—72% of respondents in the first survey are American, and the respondents are predominantly English-speaking. Seventy-six percent of respondents are white, and most are cisgender females.
Akrami, who gets noticeably breathless as she speaks, ran a statistical analysis to help the group interpret its results. “We asked about 62 different symptoms,” she says. “We invited people who have been tested or not tested and asked if they were negative or positive, then compared the symptoms.” They found that 60 of those symptoms were as likely to show up in long-haulers who tested positive as in those who tested negative or were never tested for the coronavirus. This result suggests that official tallies of cases may be overlooking a large number of patients.
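The team hasn’t published its analysis code, but the comparison Akrami describes can be sketched with a standard two-proportion test: for each symptom, check whether the share of respondents reporting it differs between those who tested positive and everyone else. The sketch below is illustrative only; the function name and the counts are assumptions, not figures from the survey.

```python
# A minimal sketch of the per-symptom comparison Akrami describes,
# assuming a simple two-proportion z-test. The counts below are
# hypothetical illustrations, not numbers from the survey.
from statsmodels.stats.proportion import proportions_ztest

def symptom_differs(pos_with, pos_total, other_with, other_total, alpha=0.05):
    """Test whether a symptom's prevalence differs between the
    tested-positive group and the tested-negative/untested group."""
    _, p_value = proportions_ztest(
        count=[pos_with, other_with],   # respondents reporting the symptom
        nobs=[pos_total, other_total],  # size of each group
    )
    return p_value, p_value < alpha

# Hypothetical example: 120 of 180 positives vs. 300 of 460 others
# report fatigue -- similar rates, so no detectable difference.
p, differs = symptom_differs(120, 180, 300, 460)
print(f"p = {p:.3f}; prevalence differs: {differs}")
```

With 62 symptoms tested at once, a careful version of this analysis would also correct for multiple comparisons (a Bonferroni adjustment, for instance), since running dozens of tests at the usual threshold will throw up a few spurious “differences” by chance.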
McCorkell, a long-hauler who tested negative, says that nearly half of survey respondents—all of whom have self-reported coronavirus symptoms—were never tested at all.
Of those who were eventually tested, many were found negative but still believe they have the virus, on the basis of their own symptoms or a physician’s diagnosis. False negatives are common in coronavirus testing, particularly for people who are tested too soon or too long after being infected.
Still, the survey captured data particular to how long-haul patients experience the disease and its symptoms. “Even when controlling for the time of test, the only difference in symptoms between those who tested positive and those who tested negative is that those who tested positive reported loss of smell and loss of taste more often,” says McCorkell.
The timing of certain symptoms among long-haulers also seems to fluctuate in a kind of pattern. According to the survey, neurological and gastrointestinal symptoms tend to appear around the second week, then dip, and then rise again around the third or fourth month.
Davis was grateful when a fellow long-hauler warned her that days 90 through 120 were the hardest. “It’s crowdsourced recovery,” she says. The survey’s results suggest that neurological symptoms are common for long-haulers: nearly two-thirds of patients described debilitating dizziness, while blurry vision, trouble concentrating, and “brain fog” were also cited frequently. More than a fifth of patients described memory loss and hallucinations.
McCorkell says that with the next survey, due out in the next few weeks, the group will attempt to reach more respondents from Black, Hispanic/Latino, and indigenous communities—groups that have been hit hardest by the coronavirus. And Akrami hopes she can pull in the bigger Body Politic community to help translate survey results into other languages and disseminate the information.
But the long-haulers are now outgrowing their own group. Davis says the Patient-Led Research team is raising money to pay for more Slack users as their numbers grow. “I was lucky to have this experience that I hope is accessible for all long-haulers,” she says. Within the group, “it’s singularly most people’s resource for medical guidance.” During a time of extreme isolation and uncertainty, Davis and the other organizers are grateful for their Slack group, and to have found each other. “This support group has been one of the biggest gifts of my life,” she says.
" |
142 | 2,020 | "How India became the world’s leader in internet shutdowns | MIT Technology Review" | "https://www.technologyreview.com/2020/08/19/1006359/india-internet-shutdowns-blackouts-pandemic-kashmir" | "How India became the world’s leader in internet shutdowns By Sonia Faleiro Clashes between those who support the rights of Indian Muslims and the police, like this confrontation in Delhi in March, continued despite the nationwide lockdown.
YAWAR NAZIR/GETTY IMAGES Spring arrived, as always in the Kashmir Valley, with melting snow and blossoming chinar trees. This year, though, brought something new. On March 18, in Srinagar, the largest city in the Himalayan region of Kashmir, a man tested positive for covid-19—the first in the valley. The mayor asked everyone to stay home, but the message didn’t travel widely. Communication across Kashmir was limited, mobile-phone services were often disrupted, and internet speeds were stuck at a plodding 2G. So although some Kashmiris followed the order to shelter in place, many had no idea they were at risk. “We knew nothing about the virus,” says Omar Salim Akhtar, a urologist at the Government Medical College in Srinagar. “Even health workers were helpless. We had to ask people traveling outside Kashmir to download the medical guidelines and bring back printouts.” The Indian government had imposed a communications shutdown in Kashmir last August in an attempt to suppress dissent in the volatile region. The shutdown was total—no mobile internet, broadband, landlines, or cable TV. Akhtar was detained during a demonstration (his placard read “This is not a protest, this is a request, patients are suffering”) but released without charge. The shutdown lasted until January, making it the longest internet blackout ever seen in the democratic world.
After partly restoring internet connectivity, the government initially banned the use of social media, and several people who violated the ban by masking their location were arrested under anti-terror laws. At the time of writing, connection speeds continue to be heavily throttled.
But as the coronavirus spread, the information blockade itself became a threat to public safety. The day after the valley’s first diagnosis, Amnesty International asked the government to restore access. “The right to health,” it said in a statement, “provides for the right to access healthcare [and] access to health-related information.” The government didn’t oblige.
India’s nationwide lockdown was still a week away, but outside Kashmir most people had no problems with internet access. They were already scrambling to move their work and classes online. In Kashmir, though, where even downloading Zoom was a struggle, switching schoolrooms or businesses to the internet was a nonstarter.
The information vacuum left people bewildered and prone to believing the swirling rumors. “On the one hand, people were saying that the virus was a plot to earn money from a vaccine and that everyone should continue visiting the mosque and attending weddings,” says Akhtar. “Others got busy drawing up wills and wanting to dig mass graves.” The Indian government claims the slow speeds, service limitations, and blackouts are necessary to maintain peace. Kashmir, a disputed region on the border of India and Pakistan, is subject to regular outbreaks of violence, and some Kashmiris who support a movement for independence use social media to organize. The government in Delhi argues that without connectivity, the independence movement will come to a halt.
Even if that were true—the movement predates social media by decades—the shutdowns also bring normal life to a standstill. After the region suffered billions of dollars’ worth of economic losses because of the August blackout, it was hard for locals to see the government’s actions as anything but a collective punishment.
Samreen Hamdani, a 30-year-old mechanical engineer, is one of those who felt that retribution. When the shutdown was imposed, she was teaching applied mathematics at a women’s polytechnic in Srinagar. Life was busy—she also ran a nonprofit to bring education to rural areas—and the days didn’t seem long enough. Then the blackout happened.
“Losing the internet is like losing the ability to talk,” Hamdani says. “It’s like losing the ability to walk.” The school canceled classes, and she had to let her nonprofit employees go. She didn’t have a plan B: her life was too closely entwined with the internet. Her once-packed days became a cycle of waking, eating, and sleeping, with little else to do or look forward to.
For years, many Indians bought the government line that internet shutdowns in Kashmir curb violence and save lives. But in 2018, instead of being limited to the volatile valley, they began taking place all over India. According to user-reported figures, there were 134 internet blackouts in more than half a dozen Indian states that year, and a further 106 of them across more than 10 states in 2019. Hundreds of millions of people were affected. That makes India, a democracy, the world leader in such shutdowns—ahead of China, Iran, and Venezuela. And it has become harder for ordinary Indians to dismiss the people affected as a threat to national security—because it’s happening to them, in their own cities, in their own homes.
At 3:50 a.m. on December 19, 2019, Kishi Arora was woken up by a text message from her mobile-phone company. The government, it said, was shutting down internet access in her neighborhood. Arora had followed the many shutdowns in Kashmir but never imagined that she would experience one in Delhi, the national capital.
Although dawn was yet to break, she immediately set to thinking about what a blackout would mean for her and her work. A hugely popular pastry chef, Arora had built her business online: she had 160,000 followers on Twitter, 17,000 on Facebook, and 24,000 on Instagram. Her team spent their days taking orders (many through social media), making food, and delivering it to customers all over the city. The text didn’t mention how long the blackout would last, and as Arora imagined the digital orders for her signature cheesecake steadily piling up, she felt her concerns pulsate like a headache.
How would she keep in touch with her mother, an ailing widow, when she was at work? Her siblings lived abroad, and the close-knit family chatted throughout the day over WhatsApp; what would they do? It was clear why the shutdown was happening: thousands of people were in the streets protesting the passage of a controversial new immigration law, the Citizenship (Amendment) Act of 2019, and things in the capital had become fractious. The CAA was a scheme to put persecuted minorities who had arrived from Bangladesh, Pakistan, and Afghanistan on a fast track to citizenship—unless they were Muslims, who had to go through the onerous normal channels.
On top of this, the government said it would start immigration checks across the entire country, even in states with little to no history of undocumented immigration, and planned to send those who could not prove they were either Indian citizens or eligible for fast-tracking into mass detention camps. In a country where many poor people don’t have documents to prove that they even exist (according to one report, only 62% of Indian children under the age of five have birth certificates), millions were at risk of failing the check.
The potential consequence for many of India’s 200 million Muslims was clear: they could become stateless people, treated like the Uighur Muslim minority in China. India is a secular republic, but Prime Minister Narendra Modi, an avowed Hindu nationalist who joined a known supremacist group when he was just eight years old, was turning it into a majoritarian Hindu state.
When protests against the CAA took off, the government turned to the tactic it had used elsewhere: shutting off the internet. There were shutdowns in India’s largest state, Uttar Pradesh; in Modi’s home state, Gujarat; and even in Karnataka, whose tech-friendly capital, Bengaluru, is known as the Silicon Valley of India.
As Arora realized the extent of the Delhi shutdown, she worried for her own security as well as for her business. The city was already notoriously unsafe for women, and as the antigovernment protests continued, the khaki-uniformed police had responded to peaceful chants and demonstrations with live rounds, tear gas, and smoke grenades. Across the country the police had already killed 25 protesters.
That day, a prominent march was planned at the historic Red Fort, where India’s prime minister traditionally hoists the flag on Independence Day. In the morning, Nikhil Pahwa, a friend of Arora’s who worked as a digital rights activist, had tweeted, “Telecom operators have confirmed to us: The Internet is being shut down in parts of Delhi. Not sure of which areas. Awaiting update.” It turned out that not all service operators had made the effort to tell users in advance. Many of the estimated 1.7 million people affected started their day in an information black hole.
The shutdown didn’t cover the entire city—only those areas with a large Muslim population. “The idea was to stop them communicating as they roamed around,” says Danish Khan, a reporter with the Economic Times newspaper. His neighborhood had experienced a blackout that morning. “They didn’t want people to mobilize quickly or share pictures and videos,” he adds.
News of what the government had surreptitiously done only spurred people on. Hundreds of protesters gathered, but many of them were immediately taken into custody. While still standing on the street trying to catch a Wi-Fi signal, Arora thought of the two young Muslim women who worked for her. Would they be safe at home without the internet, or outside where the police roamed? Sometimes, she says, it became difficult to remember that she lived in a democracy.
135 years in the making

When the Indian government wants to plunge the public into a digital darkness, all it has to do is invoke one law.
The Indian Telegraph Act of 1885 gives the federal and state governments the right to “prevent the transmission of any telegraphic message or class of messages during a public emergency or in the interest of public safety.” The British created the law and found it a useful tool for stopping uprisings during the colonial era. Later, Indian governments used it to wiretap citizens, including opposition politicians and journalists. In 2017, the law was amended to specify that it allowed “the temporary suspension of telecom services.” The Software Freedom Law Center (SFLC), a Delhi-based digital rights group, says that there are two official explanations for a shutdown: public safety and public emergency. The government either claims that misinformation circulating on social media and WhatsApp is likely to cause violence, or that an ongoing violent situation can only be brought under control by closing down communications.
Stopping violence was at least sometimes the goal when shutdowns started to increase in 2018. In June that year, two tourists were murdered in the northeastern state of Assam following rumors on WhatsApp of child kidnappers on the prowl. When two more people were beaten the next day, apparently on the same suspicion, the government shut off the state’s internet to stop the rumors from spreading.
Over the next few months, similar messages and fake videos of so-called “child lifters” popped up as WhatsApp forwards in many other states. By the end of 2019, such rumors were linked to at least 70 violent incidents, according to analysis by the data journalism website IndiaSpend.
The episode highlighted a burgeoning epidemic of fake news in India, stoked by a price war in 2016 among phone operators that had slashed the cost of mobile data and brought hundreds of millions of new people online. The internet, which had been the domain of the educated and wealthy, was now everywhere: vegetable vendors streamed Bollywood films as they parceled tomatoes and onions, and auto rickshaw drivers scrolled YouTube videos while they waited for their next customer. Today Indian mobile data is the cheapest in the world, and the average social-media user spends 17 hours on the platforms each week, more than people in China.
This dramatic expansion exposed the widespread lack of information literacy. The concept of online disinformation is largely unknown to Indians outside major cities, and while WhatsApp has taken steps to limit the spread of fake news, the government continues to deal with it through shutdowns rather than attempting education, investing in computer literacy, or even just using social media to set the record straight.
And increasingly, as the shutdowns in Delhi and elsewhere show, the authorities are now using the tactic not only to curb violence but also to suppress dissent. There is no true legal recourse: the Telegraph Act doesn’t limit how long a shutdown can last, and although there is a committee that reviews such actions, it is staffed by bureaucrats and rarely diverges from the government line.
Telecom companies themselves are badly affected by shutdowns: one estimate says they lost $350,000 every hour that the internet was down during the 2019 protests. However, they offer virtually no resistance to the state. One company, Airtel, even went back and deleted tweets in which it had informed customers of the Delhi shutdown.
Even the courts respond halfheartedly. When the SFLC filed a writ arguing that the Delhi shutdown violated fundamental rights to freedom of speech and life, the case was dismissed on the grounds that the shutdown had already been lifted. In January of this year, the Supreme Court declared the Kashmir blackout illegal, but when the government switched communications back on it kept the internet throttled to unusable speeds, and faced no consequences.
Berhan Taye, a senior policy analyst at the digital rights nonprofit Access Now, says there is “a direct correlation between shutdowns and human rights violations.” In Kashmir, even now, it’s difficult to say exactly how many people were detained during the months-long blackout. The government’s own figures say there were 5,116 “preventive arrests,” but campaigners do not believe this accounts for everybody. In Uttar Pradesh, the police arrested more than 100 people in just a single day of protests in January and beat some brutally in public view. Without the internet, however, it was difficult to get the news out.
Jan Rydzak, a research analyst at the human rights nonprofit Ranking Digital Rights, says it’s important that people continue to protest government excesses. “We have to keep showing that shutdowns aren’t effective for the government’s purposes,” he says. Otherwise, he warns, they could begin to cascade. First, other democracies in the region may formalize systems to close off the internet rather than rely on broad public safety laws. Then, as such tactics spread, the balance around the world could shift. Instead of one or two blackouts globally, there could be prolonged siege-like blockades, and “a continuous stream of ephemeral shutdowns that will never end.” This year, the pandemic has slowed the rate of shutdowns—but it has not stopped them. The Indian government has already shut off the internet on 35 separate occasions, 26 of them in Kashmir. Even as the number of confirmed covid-19 cases in Jammu and Kashmir crossed 13,000 and the death toll passed 200 in mid-July, the government refused to restore 4G internet speeds. In May, the Supreme Court referred a judgment on a petition calling for the restoration of full service to a committee of government-appointed officials—in essence asking the government to decide whether or not its own actions were lawful. To no one’s surprise, the committee said the current 2G speed doesn’t “pose any hindrance to Covid-19 control measures.” Akhtar, the doctor in Srinagar, disagrees. On May 19, around two months into the pandemic, he stepped out of the operating theater and reached for his mobile phone, only to realize that he couldn’t load his emails. He immediately understood that the city was in the midst of another internet shutdown.
Usually he would call around to see if anyone knew what was going on. This time, however, even making a phone call was impossible. It turned out that security personnel had shot dead two suspected militants in downtown Srinagar, and the government had turned off all connectivity to prevent the news from circulating and protesters from gathering.
Standing in his scrubs, Akhtar had no idea when, or if, he would get back on the grid.
Since the start of the pandemic he had felt handicapped, almost entirely reliant on others to give him health-care updates. He didn’t have the latest research. Now, even his phone was useless. The world was in the middle of one deadly crisis, but faced with everyday violence, surrounded by security forces, and cut off from sources of information, it seemed to Akhtar that Kashmir was in the middle of two.
This story was part of our September/October 2020 issue.
" |
143 | 2,019 | "What is geoengineering—and why should you care? | MIT Technology Review" | "https://www.technologyreview.com/s/614079/what-is-geoengineering-and-why-should-you-care-climate-change-harvard" | "What is geoengineering—and why should you care? By James Temple Giant ash cloud from the eruption of Mount Pinatubo in 1991, towering above farms and agricultural lands in the Philippines.
USGS archives It’s becoming clear that we won’t cut carbon emissions soon enough to prevent catastrophic climate change.
But there may be ways to cool the planet more quickly and buy us a little more time to shift away from fossil fuels.
They’re known collectively as geoengineering, and though it was once a scientific taboo, a growing number of researchers are running computer simulations and proposing small-scale outdoor experiments. Even some legislators have begun discussing what role these technologies could play (see “The growing case for geoengineering”).
But what is geoengineering exactly? Traditionally, geoengineering has encompassed two very different things: sucking carbon dioxide out of the sky so the atmosphere will trap less heat, and reflecting more sunlight away from the planet so less heat is absorbed in the first place.
The first of these, known as “carbon removal” or “negative emissions technologies,” is something that scholars now largely agree we’ll need to do in order to avoid dangerous levels of warming (see “One man’s two-decade quest to suck greenhouse gas out of the sky”). Most no longer call it “geoengineering”—to avoid associating it with the second, more contentious branch, known as solar geoengineering.
This is a blanket term that includes ideas like setting up sun shields in space or dispersing microscopic particles in the air in various ways to make coastal clouds more reflective, dissipate heat-trapping cirrus clouds, or scatter sunlight in the stratosphere.
The word geoengineering suggests a planetary-scale technology. But some researchers have looked at the possibility of conducting it in localized ways as well, exploring various methods that might protect coral reefs, coastal redwoods, and ice sheets.
Where did the idea come from? It’s not a particularly new idea. In 1965, President Lyndon Johnson’s Science Advisory Committee warned it might be necessary to increase the reflectivity of the Earth to offset rising greenhouse-gas emissions. The committee went so far as to suggest sprinkling reflective particles across the oceans. (It’s revealing that in this, the first ever presidential report on the threat of climate change, the idea of cutting emissions didn’t seem worth mentioning, as author Jeff Goodell notes in How to Cool the Planet.)
But the best-known form of solar geoengineering involves spraying particles into the stratosphere, sometimes known as “stratospheric injection” or “stratospheric aerosol scattering.” (Sorry, we don’t come up with the names.) That’s in part because nature has already demonstrated it’s possible.
Most famously, the massive eruption of Mt. Pinatubo in the summer of 1991 spewed some 20 million tons of sulfur dioxide into the sky. By reflecting sunlight back into space, the particles in the stratosphere helped push global temperatures down about 0.5 °C over the next two years.
And while we don’t have precise data, huge volcanic eruptions in the distant past had similar effects. The explosion of Mount Tambora in Indonesia in 1815 was famously followed by the “Year Without a Summer” in 1816, a gloomy period that may have helped inspire the creation of two of literature’s most enduring horror creatures, vampires and Frankenstein’s monster.
Soviet climatologist Mikhail Budyko is generally credited as the first to suggest we could counteract climate change by mimicking this volcanic phenomenon. He raised the possibility of burning sulfur in the stratosphere in a 1974 book.
In the following decades, the concept occasionally popped up in research papers and at scientific conferences, but it didn’t gain much attention until the late summer of 2006, when Paul Crutzen, a Nobel Prize–winning atmospheric chemist, called for geoengineering research in an article in Climatic Change.
That was particularly significant because Crutzen had won his Nobel for research on the dangers of the growing ozone hole, and one of the known effects of sulfur dioxide is ozone depletion.
In other words, he thought climate change was such a threat that it was worth exploring a remedy he knew could pose other serious dangers.
So could geoengineering be the solution to climate change, relieving us of the hassle of cutting back on fossil fuels? No—although the idea that it could is surely why some energy executives and Republican legislators have taken an interest. But even if it works (on which more below), it’s at best a temporary stay of execution.
It does little to address other climate dangers, notably including ocean acidification, or the considerable environmental damage from extracting and burning finite fossil fuels. And greater levels of geoengineering may increase other disruptions in the climate system, so we can’t just keep doing more and more of it to offset ever-rising emissions.
How is geoengineering being researched? In the years since Crutzen’s paper, more researchers have studied geoengineering, mainly using computer simulations or small lab experiments to explore whether it would really work, how it might be done, what sorts of particles could be used, and what environmental side effects it might produce.
The computer modeling consistently shows it would reduce global temperatures, sea-level rise, and certain other climate impacts. But some studies have found that high doses of certain particles might also damage the protective ozone layer, alter global precipitation patterns, and reduce crop growth in certain areas.
Other researchers have found that these risks can be reduced, if not eliminated, by using particles other than sulfur dioxide and by limiting the extent of geoengineering.
But no one would suggest we’ve arrived at the final answer on most of these questions. Researchers in the field believe we need to do a lot more modeling work to explore these issues in greater detail. And it’s also clear that simulations can only tell us so much, which is why some are proposing small outdoor experiments.
Has anybody conducted real-world geoengineering experiments? In 2009, Russian scientists conducted what is believed to be the first outdoor geoengineering experiment.
They mounted aerosol generators on a helicopter and car and sprayed particles as high as 200 meters (660 feet). The scientists claimed, in a paper published in Russian Meteorology and Hydrology, that the experiment had reduced the amount of sunlight that reached the surface.
(It’s worth noting that Yuri Izrael, a climate skeptic and scientific advisor to Vladimir Putin, was the lead author of the study as well as the editor of the journal.) One of the first attempts to conduct an experiment that was openly advertised in advance as geoengineering-related, known as the SPICE project, was ultimately scrapped. The idea was to pump particles up a pipe to a high-altitude balloon that would scatter them in the stratosphere. But the proposal prompted a public backlash, particularly after it emerged that some of the researchers had already applied for patents on the technology.
Scientists at Harvard have proposed what could be the next and most formal geoengineering experiment to date. They hope to launch a balloon equipped with propellers and sensors that would spray a tiny amount of calcium carbonate in the stratosphere. The aircraft would then fly through the plume and attempt to measure things like how broadly the particles disperse, how they interact with other gases, and how reflective they are. The team has already raised the funds, put an advisory committee in place, contracted with a balloon company, and begun development work on the necessary hardware. (See “Geoengineering is very controversial. How can you do experiments? Harvard has some ideas.”)
Meanwhile, researchers at the University of Washington—in partnership with Xerox’s Palo Alto Research Center and other groups—have proposed small-scale experiments as part of a larger research program to learn more about the potential of “marine cloud brightening.” The idea, first floated by the British physicist John Latham in 1990, is that spraying tiny salt particles from seawater toward low-lying clouds above the sea could form additional droplets, increasing the surface area—and thus reflectivity—of the clouds. The team is currently raising funds to develop a “cloud-physics research instrument” and test it by spraying a small amount of sea-salt mist somewhere off the US Pacific Coast.
There have also been some early efforts in other areas of geoengineering, including more than a dozen so-called iron-fertilization experiments in the open ocean, according to Nature.
The concept there is that dumping iron into the water would stimulate the growth of phytoplankton, which would pull carbon dioxide out of the air. But scientists have questioned how well it really works, and what sorts of side effects it could have on ocean ecosystems. Environmental groups and others also criticized early efforts in this area, arguing that they went ahead without proper permission or scientific oversight.
Is anybody actually doing geoengineering? Researchers stress that these experiments aren’t actual geoengineering: the amounts of material involved are far too small to alter global temperatures. Indeed, despite a vast and varied array of online conspiracy theories to the contrary, feverishly spread by chemtrails truthers, nobody is conducting planetary-scale geoengineering today.
At least, nobody is on purpose. You could argue that burning massive amounts of fossil fuels is a form of geoengineering, just an inadvertent and very dumb one. And we also know that sulfur pollution from coal plants and ships has likely reduced global temperatures. Indeed, new UN rules requiring ships to emit less sulfur might actually raise temperatures slightly (see “We’re about to kill a massive, accidental experiment in reducing global warming”).
There’s also a long and rich history of efforts in the US and China, among other places, to seed clouds with particles to increase snow or rainfall (see “Weather engineering in China”). But the results are mixed, and local weather modification is a far cry from attempting to twist the knob on the entire climate system.
Isn’t geoengineering controversial? Very.
There are real concerns about conducting, researching, or even discussing geoengineering.
Critics argue that openly talking about the possibility of a technological “solution” to climate change (it’s not a solution, as explained above) will ease pressure to address the root cause of the problem: rising greenhouse-gas emissions. And some believe that moving forward with outdoor experiments is a slippery slope. It could create incentives to conduct ever bigger experiments, until we’re effectively doing geoengineering without having collectively determined to.
A technology that knows no national bounds also poses complex, if not insurmountable, geopolitical questions. Who should decide, and who should have a say in whether we proceed with such an effort? How do you settle on a single global average temperature to aim for, since it will affect different nations in very different ways? And if we can’t settle on one, or come to a consensus on whether to deploy the technology at all, will some nation or individual do it anyway as climate catastrophes multiply? If so, could that spark conflicts, even wars? Some argue it’s playing God to tinker with a system as complex as the climate. Or that it’s simply foolish to counteract one pollutant with another, or to try to fix a technocratic failure with a technocratic solution.
A final concern, and an indisputable one, is that modeling and experiments will only tell us so much. We can’t really know how well geoengineering will work and what the consequences will be until we actually try it—and at that point, we’re all stuck with the results.
Then why on earth is anyone considering it? Few serious people would describe themselves as geoengineering advocates.
Scientists who study it profess ambivalence and openly acknowledge it’s not the best solution to climate change. But they worry that society is locking in dangerous levels of warming and extreme weather by continuing to build power plants, vehicles, and cities that will pump out greenhouse gases for decades to come. So a growing number of academics say it would be irresponsible not to explore something that could potentially save many, many lives, as well as species and ecosystems—as long as it’s used alongside serious efforts to slash emissions.
Yes, it’s dangerous, they say—but compared to what? More dangerous than the climate-change-driven famine, flooding, fires, extinctions, and migration that we’re already beginning to see? As those effects worsen, the public and politicians may come to think that tinkering with the entire planet’s atmosphere is a risk worth taking.
" |
144 | 2,021 | "Bill Gates: Rich nations should shift entirely to synthetic beef | MIT Technology Review" | "https://www.technologyreview.com/2021/02/14/1018296/bill-gates-climate-change-beef-trees-microsoft" | "Bill Gates: Rich nations should shift entirely to synthetic beef We spoke to the Microsoft cofounder about his new book, the limits of his optimism, the tech breakthroughs and energy policies we need—and how his thinking on climate change has evolved.
By James Temple John Keatley In his new book, How to Avoid a Climate Disaster, Bill Gates lays out what it will really take to eliminate the greenhouse-gas emissions driving climate change.
The Microsoft cofounder, who is now cochair of the Bill and Melinda Gates Foundation and chair of the investment fund Breakthrough Energy Ventures, sticks to his past argument that we’ll need numerous energy breakthroughs to have any hope of cleaning up all parts of the economy and the poorest parts of the world. The bulk of the book surveys the technologies needed to slash emissions in “hard to solve” sectors like steel, cement, and agriculture.
He stresses that innovation will make it cheaper and more politically feasible for every nation to cut or prevent emissions. But Gates also answers some of the criticisms that his climate prescriptions have been overly focused on “energy miracles” at the expense of aggressive government policies.
The closing chapters of the book lay out long lists of ways that nations could accelerate the shift, including high carbon prices, clean electricity standards, clean fuel standards, and far more funding for research and development. Gates calls for governments to quintuple their annual investments in clean tech, which would add up to $35 billion in the US.
Gates describes himself as an optimist, but it’s a constrained type of optimism. He dedicates an entire chapter to describing just how hard a problem climate change is to address. And while he consistently says we can develop the necessary technology and we can avoid a disaster, it’s less clear how hopeful he is that we will.
I spoke to Gates in December about his new book, the limits of his optimism, and how his thinking on climate change has evolved.
Gates is an investor either personally or through Breakthrough Energy Ventures in several of the companies he mentions below, including Beyond Meat, Carbon Engineering, Impossible Foods, Memphis Meats, and Pivot Bio. This interview has been edited for space and clarity.
Q: In the past, it seemed you would distance yourself from the policy side of climate change, which had led to some criticisms that you are overly focused on innovation. Was there a shift in your thinking, or was it a deliberate choice to lay out the policy side in your book? A: No, that’s absolutely fair. In general, if you can do innovation without having to get involved in the political issues, I always prefer that. It’s more natural for me to find a great scientist and back multiple approaches.
But the reason I smile when you say it is because in our global health work, there’s a whole decade where I’m recognizing that to have the impact we want, we’re going to have to work with both the donor governments in a very deep way and the recipient governments that actually create these primary health-care systems.
And my naïve view at the beginning had been “Hey, I’ll just create a malaria vaccine and other people will worry about getting that out into the field.” That clearly wasn’t a good idea. I realized that for a lot of these diseases, including diarrhea and pneumonia, there actually were vaccines. And it was more of a political challenge in getting the marginal pricing and the funds raised and the vaccine coverage up, not the scientific piece.
Here, there’s no doubt you need to get government policy in a huge way. Take things like clean steel: it doesn’t have other benefits. There’s no market demand for clean steel. Even carbon taxes at low costs per ton aren’t enough to get clean steel on the learning curve. You need like a $300-a-ton type of carbon tax. And so to get that sector going, you need to do some basic R&D, and you need to actually start having purchase requirements or funds set aside to pay that premium, both from government and perhaps companies and individuals as well.
But, you know, we need a lot of countries, not just a few, to engage in this.
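The arithmetic behind the $300-a-ton figure Gates cites is easy to sketch. As a rough illustration (the emissions intensity and steel price below are ballpark assumptions, not numbers from the interview or the book), a carbon tax only starts to rival the premium for clean steelmaking once it reaches a few hundred dollars per ton of CO2:

```python
# A back-of-envelope sketch, assuming roughly 1.8 tons of CO2 per ton
# of conventional steel and a ~$600/ton steel price. Both figures are
# rough illustrative assumptions, not numbers Gates cites.
CO2_PER_TON_STEEL = 1.8   # tons of CO2 per ton of steel (approx.)
STEEL_PRICE = 600.0       # dollars per ton of conventional steel (approx.)

for carbon_tax in (15, 50, 300):  # dollars per ton of CO2
    penalty = carbon_tax * CO2_PER_TON_STEEL
    print(f"${carbon_tax:>3}/t CO2 -> +${penalty:.0f} per ton of steel "
          f"({penalty / STEEL_PRICE:.0%} of the base price)")

# At $15/t the penalty is ~$27, under 5% of the price -- too small to
# change behavior. Only near $300/t does it (~$540, roughly 90% of the
# base price) approach the premium clean steelmaking currently carries.
```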
Q: How do you feel about our chances of making real political progress, particularly in the US, in the moment we find ourselves in? A: I am optimistic. Biden being elected is a good thing. Even more encouraging is that if you poll young voters, millennials, both who identify as Republican and Democrats, the interest in this issue is very high. And they’re the ones who will be alive when the world either is massively suffering from these problems or is not, depending on what gets done. So there is political will.
But there’s a lot of interplay [between politics and innovation]. If you try and do this with brute force, just paying the current premiums for clean technology, the economic cost is gigantic and the economic displacement is gigantic. And so I don’t believe that even a rich country will do this by brute force.
But in the near term, you may be able to get tens of billions of dollars for the innovation agenda. Republicans often like innovation.
I’m asking for something that’s like the size of the National Institutes of Health budget. I feel [it’s politically feasible] because it creates high-paying jobs and because it answers the question of—well, if the US gets rid of its 14% [of global emissions], big deal: what about the growing percent that comes from India as it’s providing basic capabilities to its citizens? I just imagine a phone call to the Indians in 2050 where you say, Please, please, build half as much shelter because of the green premium [for clean cement and steel]. And they’re like, What? We didn’t cause these emissions.
Innovation is the only way to [reduce those price premiums].
Q: You’ve said a couple of times you’re optimistic, and that’s sort of famously your position on these things. But of course, optimism is a relative term. Do you think we can realistically hold warming to or below a 2 °C increase at this point? A: That would require us to get the policy right, to get many, many countries involved, and to be lucky on quite a few of the technological advances. That’s pretty much a best case. Anything better than that is not at all realistic, and there are days when even that doesn’t seem realistic.
It’s not out of the question, but it requires awfully good progress. Even something like, do we get [an energy] storage miracle or not? We can’t make ourselves dependent on that. Batteries today can’t, within a factor of 20, store for the seasonal variation that you get [from intermittent sources like wind and solar]. We just don’t make enough batteries; it would be way too expensive. So we have to have other paths—like fission or fusion—that can give us that reliable source of electricity, which we’ll be even more dependent on than ever.
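Gates’s “factor of 20” is a cost claim, but the scale mismatch alone is easy to illustrate. Every figure in the sketch below is an order-of-magnitude assumption chosen for illustration (roughly 4,000 TWh of annual US electricity use; roughly 1 TWh of annual global battery-cell production around the time of the interview), not a number he cites.

```python
# An order-of-magnitude sketch of why today's batteries can't cover
# seasonal gaps in wind and solar output. Every figure is a rough
# illustrative assumption, not a number Gates cites.
US_ANNUAL_ELECTRICITY_TWH = 4_000    # approximate US consumption per year
DAILY_TWH = US_ANNUAL_ELECTRICITY_TWH / 365

SEASONAL_BUFFER_DAYS = 30            # weeks-to-months of low sun and wind
storage_needed_twh = DAILY_TWH * SEASONAL_BUFFER_DAYS

GLOBAL_CELL_OUTPUT_TWH = 1.0         # rough annual cell production, worldwide
years_of_output = storage_needed_twh / GLOBAL_CELL_OUTPUT_TWH

print(f"Buffering ~{SEASONAL_BUFFER_DAYS} days of US demand needs "
      f"~{storage_needed_twh:.0f} TWh of storage -- about "
      f"{years_of_output:.0f} years of the world's entire cell output.")
```

Even at an optimistic $100 per kilowatt-hour, a few hundred terawatt-hours of cells would run to tens of trillions of dollars, which is the “way too expensive” part of his point.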
Q: In the book you cover a broad array of hard-to-solve sectors. The one I still have the hardest time with, in terms of fully addressing it, is food. The scale is massive. We’ve barely begun. We fundamentally don’t have replacements that completely eliminate the highly potent emissions from burping livestock and fertilizer. How hopeful are you about agriculture? A: There are [companies], including one in the [Breakthrough Energy Ventures] portfolio called Pivot Bio, that significantly reduce the amount of fertilizer you need. There are advances in seeds, including seeds that do what legumes do: that is, they’re able to [convert nitrogen in the soil into compounds that plants can use] biologically. But the ability to improve photosynthesis and to improve nitrogen fixation is one of the most underinvested things.
In terms of livestock, it’s very difficult. There are all the things where they feed them different food, like there’s this one compound that gives you a 20% reduction [in methane emissions]. But sadly, those bacteria [in their digestive system that produce methane] are a necessary part of breaking down the grass. And so I don’t know if there’ll be some natural approach there. I’m afraid the synthetic [protein alternatives like plant-based burgers] will be required for at least the beef thing.
Now the people like Memphis Meats who do it at a cellular level—I don’t know that that will ever be economical. But Impossible and Beyond have a road map, a quality road map and a cost road map, that makes them totally competitive.
As for scale today, they don’t represent 1% of the meat in the world, but they’re on their way. And Breakthrough Energy has four different investments in this space for making the ingredients very efficiently. So yeah, this is the one area where my optimism five years ago would have made this, steel, and cement the three hardest.
Now I’ve said I can actually see a path. But you’re right that saying to people, “You can’t have cows anymore”—talk about a politically unpopular approach to things.
Q: Do you think plant-based and lab-grown meats could be the full solution to the protein problem globally, even in poor nations? Or do you think it’s going to be some fraction because of the things you’re talking about, the cultural love of a hamburger and the way livestock is so central to economies around the world? A: For Africa and other poor countries, we’ll have to use animal genetics to dramatically raise the amount of beef per emissions for them. Weirdly, because US livestock are so productive, the emissions per pound of beef are dramatically lower than the emissions per pound in Africa. And as part of the [Bill and Melinda Gates] Foundation’s work, we’re taking the benefit of the African livestock, which means they can survive in heat, and crossing in the monstrous productivity both on the meat side and the milk side of the elite US beef lines.
So no, I don’t think the poorest 80 countries will be eating synthetic meat. I do think all rich countries should move to 100% synthetic beef. You can get used to the taste difference, and the claim is they’re going to make it taste even better over time. Eventually, that green premium is modest enough that you can sort of change the [behavior of] people or use regulation to totally shift the demand.
So for meat in the middle-income-and-above countries, I do think it’s possible. But it’s one of those ones where, wow, you have to track it every year and see, and the politics [are challenging]. There are all these bills that say it’s got to be called, basically, lab garbage to be sold. They don’t want us to use the beef label.
Q: You talk a lot in the book about the importance of carbon-removal technologies, like direct air capture. You also did come out and say that planting trees as a climate solution is overblown. What’s your reaction to things like the Trillion Trees Initiative and the large number of corporations announcing plans to achieve negative emissions at least in part through reforestation and offsets? A: [To offset] my own emissions, I’ve bought clean aviation fuel. I’ve paid to replace natural-gas heating in low-income housing projects with electric heat pumps—where I pay the capital cost premium and they get the benefit of the lower monthly bill. And I’ve sent money to Climeworks [a Switzerland-based company that removes carbon dioxide from the air and stores it permanently underground].
For the carbon emissions I’ve done—and I’ve gotten rid of more than what I emit—it comes out to $400 a ton.
Any of these schemes that claim to remove carbon for $5, $15, $30 a ton? Just look at it.
The idea that there are all these places where there’s plenty of good soil and plenty of good water and just accidentally, the trees didn’t grow there—and if you plant a tree there, it’s going to be there for thousands of years—[is wrong].
The lack of validity for most of that tree planting is one of those things where this movement is not an honest movement yet. It doesn’t know how to measure truth yet. There are all sorts of hokey things that allow people to use their PR budgets to buy virtue but aren’t really having the impact. And we’ll get smarter over time about what is a real offset.
So no, most of those offset things don’t stand up. The offset thing that we think will stand up is if you gather money from companies and consumers to bootstrap the market for clean steel and clean cement. Because of the learning-curve benefits there, putting your money into that, instead of on tree planting, is catalytic in nature and will make a contribution. We need some mix of government, company, and individual money to drive those markets.
Q: I do have to ask this: Microsoft is in the process of trying to eliminate its entire historic emissions, and there was a Bloomberg article that had a figure in there that I was a little surprised by. The company apparently wants to do it at $20 a ton? Do you think we can achieve reliable permanent carbon removal for $20 a ton eventually? A: Very unlikely.
I mean, if you’d asked me 10 years ago how cheap solar panels would become, I would have been wrong. That went further than anyone expected.
Science is mysterious, and saying that science can do X or can’t do X is kind of a fool’s game. In many cases, it’s done things that no one would have predicted.
But even the liquid process, which is Carbon Engineering’s approach, will have a very tough time getting to $100 a ton.
With all these things, you have capital costs and you have energy costs. So getting to $20 a ton is very unlikely. There are a lot of current offset programs that claim they’re doing that, and that needs a lot of auditing because to eliminate carbon, you have to keep it out of the atmosphere for the full 10,000-year half-life. Most people have a hard time economically costing out 10,000 years of costs. Believe me, these tree guys make sure that if it burns down, they find another magic place where no tree has ever grown, to replant.
But it’s not to say that there aren’t a few places you can plant trees, or that a few of these offset things will work, like plugging certain methane leaks—that’s a high payback. We should use regulations; we should go fund those things.
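(To make the capital-plus-energy point about direct air capture concrete, here is a minimal Python sketch of a cost floor. The energy intensity, energy price, and annualized capital figure are all illustrative assumptions; published estimates vary widely.)

# Minimal sketch: capital and energy costs put a floor under direct air capture.
# All constants are illustrative assumptions, not figures from the interview.
GJ_PER_TON_CO2 = 8.0       # assumed total energy to capture one ton of CO2
ENERGY_PRICE_PER_GJ = 5.0  # assumed blended energy price, $/GJ
CAPEX_PER_TON = 60.0       # assumed annualized capital cost per ton of capacity

energy_cost = GJ_PER_TON_CO2 * ENERGY_PRICE_PER_GJ
print(f"energy ${energy_cost:.0f}/ton + capital ${CAPEX_PER_TON:.0f}/ton "
      f"= ~${energy_cost + CAPEX_PER_TON:.0f} per ton of CO2 removed")

(Even these fairly generous assumptions land near $100 a ton, which is why a $20 figure would require breakthroughs on the capital and the energy side at once.)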
by James Temple. This story was part of our March/April 2021 issue.
" |
145 | 2,019 | "What is geoengineering—and why should you care? | MIT Technology Review" | "https://www.technologyreview.com/2019/08/09/615/what-is-geoengineering-and-why-should-you-care-climate-change-harvard" | "What is geoengineering—and why should you care? By James Temple. Giant ash cloud from the eruption of Mount Pinatubo, 1991, towering above farms and agricultural lands in the Philippines.
USGS archives. It’s becoming clear that we won’t cut carbon emissions soon enough to prevent catastrophic climate change.
But there may be ways to cool the planet more quickly and buy us a little more time to shift away from fossil fuels.
They’re known collectively as geoengineering, and though it was once a scientific taboo, a growing number of researchers are running computer simulations and proposing small-scale outdoor experiments. Even some legislators have begun discussing what role these technologies could play (see “ The growing case for geoengineering ”).
But what is geoengineering exactly? Traditionally, geoengineering has encompassed two very different things: sucking carbon dioxide out of the sky so the atmosphere will trap less heat, and reflecting more sunlight away from the planet so less heat is absorbed in the first place.
The first of these, known as “carbon removal” or “negative emissions technologies,” is something that scholars now largely agree we’ll need to do in order to avoid dangerous levels of warming (see “ One man’s two-decade quest to suck greenhouse gas out of the sky ”). Most no longer call it “geoengineering”—to avoid associating it with the second, more contentious branch, known as solar geoengineering.
This is a blanket term that includes ideas like setting up sun shields in space or dispersing microscopic particles in the air in various ways to make coastal clouds more reflective , dissipate heat-trapping cirrus clouds , or scatter sunlight in the stratosphere.
The word geoengineering suggests a planetary-scale technology. But some researchers have looked at the possibility of conducting it in localized ways as well, exploring various methods that might protect coral reefs, coastal redwoods , and ice sheets.
Where did the idea come from? It’s not a particularly new idea. In 1965, President Lyndon Johnson’s Science Advisory Committee warned it might be necessary to increase the reflectivity of the Earth to offset rising greenhouse-gas emissions. The committee went so far as to suggest sprinkling reflective particles across the oceans. (It’s revealing that in this, the first ever presidential report on the threat of climate change, the idea of cutting emissions didn’t seem worth mentioning, as author Jeff Goodell notes in How to Cool the Planet.)
But the best-known form of solar geoengineering involves spraying particles into the stratosphere, sometimes known as “stratospheric injection” or “stratospheric aerosol scattering.” (Sorry, we don’t come up with the names.) That’s in part because nature has already demonstrated it’s possible.
Most famously, the massive eruption of Mt. Pinatubo in the summer of 1991 spewed some 20 million tons of sulfur dioxide into the sky. By reflecting sunlight back into space, the particles in the stratosphere helped push global temperatures down about 0.5 °C over the next two years.
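(The Pinatubo numbers are roughly consistent with the simplest climate relation: temperature change equals a sensitivity parameter times radiative forcing. The forcing and short-term sensitivity in the Python sketch below are rough, literature-style assumptions used only to show the scale, not measured values from this article.)

# Order-of-magnitude check on Pinatubo's cooling: dT = lambda * dF.
# Both constants are rough assumptions.
PEAK_FORCING_W_M2 = -3.0       # assumed peak aerosol radiative forcing, W/m^2
SHORT_TERM_SENSITIVITY = 0.15  # assumed transient response, K per (W/m^2);
                               # ocean thermal inertia keeps this well below
                               # the equilibrium value over a one-to-two-year pulse

delta_t = SHORT_TERM_SENSITIVITY * PEAK_FORCING_W_M2
print(f"estimated peak cooling: {delta_t:.2f} K")  # about -0.45 K

(That back-of-envelope result sits close to the observed cooling of roughly 0.5 °C, which is one reason volcanic eruptions are treated as a natural proof of concept for stratospheric aerosols.)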
And while we don’t have precise data, huge volcanic eruptions in the distant past had similar effects. The explosion of Mount Tambora in Indonesia in 1815 was famously followed by the “Year Without a Summer” in 1816, a gloomy period that may have helped inspire the creation of two of literature’s most enduring horror creatures, vampires and Frankenstein’s monster.
Soviet climatologist Mikhail Budyko is generally credited as the first to suggest we could counteract climate change by mimicking this volcanic phenomenon. He raised the possibility of burning sulfur in the stratosphere in a 1974 book.
In the following decades, the concept occasionally popped up in research papers and at scientific conferences, but it didn’t gain much attention until the late summer of 2006, when Paul Crutzen, a Nobel Prize–winning atmospheric chemist, called for geoengineering research in an article in Climatic Change.
That was particularly significant because Crutzen had won his Nobel for research on the dangers of the growing ozone hole, and one of the known effects of sulfur dioxide is ozone depletion.
In other words, he thought climate change was such a threat that it was worth exploring a remedy he knew could pose other serious dangers.
So could geoengineering be the solution to climate change, relieving us of the hassle of cutting back on fossil fuels? No—although the idea that it does is surely why some energy executives and Republican legislators have taken an interest. But even if it works (on which more below), it’s at best a temporary stay of execution.
It does little to address other climate dangers, notably including ocean acidification, or the considerable environmental damage from extracting and burning finite fossil fuels. And greater levels of geoengineering may increase other disruptions in the climate system, so we can’t just keep doing more and more of it to offset ever rising emissions.
How is geoengineering being researched? In the years since Crutzen’s paper, more researchers have studied geoengineering, mainly using computer simulations or small lab experiments to explore whether it would really work, how it might be done, what sorts of particles could be used, and what environmental side effects it might produce.
The computer modeling consistently shows it would reduce global temperatures, sea-level rise, and certain other climate impacts. But some studies have found that high doses of certain particles might also damage the protective ozone layer, alter global precipitation patterns, and reduce crop growth in certain areas.
Other researchers have found that these risks can be reduced, if not eliminated, by using particles other than sulfur dioxide and by limiting the extent of geoengineering.
But no one would suggest we’ve arrived at the final answer on most of these questions. Researchers in the field believe we need to do a lot more modeling work to explore these issues in greater detail. And it’s also clear that simulations can only tell us so much, which is why some are proposing small outdoor experiments.
Has anybody conducted real-world geoengineering experiments? In 2009, Russian scientists conducted what is believed to be the first outdoor geoengineering experiment.
They mounted aerosol generators on a helicopter and car and sprayed particles as high as 200 meters (660 feet). The scientists claimed, in a paper published in Russian Meteorology and Hydrology, that the experiment had reduced the amount of sunlight that reached the surface.
(It’s worth noting that Yuri Izrael, a climate skeptic and scientific advisor to Vladimir Putin, was the lead author of the study as well as the editor of the journal.) One of the first attempts to conduct an experiment that was openly advertised in advance as geoengineering-related, known as the SPICE project, was ultimately scrapped. The idea was to pump particles up a pipe to a high-altitude balloon that would scatter them in the stratosphere. But the proposal prompted a public backlash, particularly after it emerged that some of the researchers had already applied for patents on the technology.
Scientists at Harvard have proposed what could be the next and most formal geoengineering experiment to date. They hope to launch a balloon equipped with propellers and sensors that would spray a tiny amount of calcium carbonate in the stratosphere. The aircraft would then fly through the plume and attempt to measure things like how broadly the particles disperse, how they interact with other gases, and how reflective they are. The team has already raised the funds, put an advisory committee in place, contracted with a balloon company, and begun development work on the necessary hardware. (See “Geoengineering is very controversial. How can you do experiments? Harvard has some ideas.”)
Meanwhile, researchers at the University of Washington—in partnership with Xerox’s Palo Alto Research Center and other groups—have proposed small-scale experiments as part of a larger research program to learn more about the potential of “marine cloud brightening.” The idea, first floated by the British physicist John Latham in 1990, is that spraying tiny salt particles from seawater toward low-lying clouds above the sea could form additional droplets, increasing the surface area—and thus reflectivity—of the clouds. The team is currently raising funds to develop a “cloud-physics research instrument” and test it by spraying a small amount of sea-salt mist somewhere off the US Pacific Coast.
There have also been some early efforts in other areas of geoengineering, including more than a dozen so-called iron-fertilization experiments in the open ocean, according to Nature.
The concept there is that dumping iron into the water would stimulate the growth of phytoplankton, which would pull carbon dioxide out of the air. But scientists have questioned how well it really works, and what sorts of side effects it could have on ocean ecosystems. Environmental groups and others also criticized early efforts in this area, arguing that they went ahead without proper permission or scientific oversight.
Is anybody actually doing geoengineering? Researchers stress that these experiments aren’t actual geoengineering: the amounts of material involved are far too small to alter global temperatures. Indeed, despite a vast and varied array of online conspiracy theories to the contrary, feverishly spread by chemtrails truthers, nobody is conducting planetary-scale geoengineering today.
At least, nobody is on purpose. You could argue that burning massive amounts of fossil fuels is a form of geoengineering, just an inadvertent and very dumb one. And we also know that sulfur pollution from coal plants and ships has likely reduced global temperatures. Indeed, new UN rules requiring ships to emit less sulfur might actually raise temperatures slightly (see “ We’re about to kill a massive, accidental experiment in reducing global warming ”).
There’s also a long and rich history of efforts in the US and China, among other places, to seed clouds with particles to increase snow or rainfall (see “ Weather engineering in China ”). But the results are mixed, and local weather modification is a far cry from attempting to twist the knob on the entire climate system.
Isn’t geoengineering controversial? Very.
There are real concerns about conducting, researching, or even discussing geoengineering.
Critics argue that openly talking about the possibility of a technological “solution” to climate change (it’s not a solution, as explained above) will ease pressure to address the root cause of the problem: rising greenhouse-gas emissions. And some believe that moving forward with outdoor experiments is a slippery slope. It could create incentives to conduct ever bigger experiments, until we’re effectively doing geoengineering without having collectively determined to.
A technology that knows no national bounds also poses complex, if not insurmountable, geopolitical questions. Who should decide, and who should have a say in, whether we proceed with such an effort? How do you settle on a single global average temperature to aim for, since it will affect different nations in very different ways? And if we can’t settle on one, or come to a consensus on whether to deploy the technology at all, will some nation or individual do it anyway as climate catastrophes multiply? If so, could that spark conflicts, even wars? Some argue it’s playing God to tinker with a system as complex as the climate. Or that it’s simply foolish to counteract one pollutant with another, or to try to fix a technocratic failure with a technocratic solution.
A final concern, and an indisputable one, is that modeling and experiments will only tell us so much. We can’t really know how well geoengineering will work and what the consequences will be until we actually try it—and at that point, we’re all stuck with the results.
Then why on earth is anyone considering it? Few serious people would describe themselves as geoengineering advocates.
Scientists who study it profess ambivalence and openly acknowledge it’s not the best solution to climate change. But they worry that society is locking in dangerous levels of warming and extreme weather by continuing to build power plants, vehicles, and cities that will pump out greenhouse gases for decades to come. So a growing number of academics say it would be irresponsible not to explore something that could potentially save many, many lives, as well as species and ecosystems—as long as it’s used alongside serious efforts to slash emissions.
Yes, it’s dangerous, they say—but compared to what? More dangerous than the climate-change-driven famine, flooding, fires, extinctions, and migration that we’re already beginning to see? As those effects worsen, the public and politicians may come to think that tinkering with the entire planet’s atmosphere is a risk worth taking.
by James Temple
" |
146 | 2,019 | "Geoengineering is very controversial. How can you do experiments? Harvard has some ideas. | MIT Technology Review" | "https://www.technologyreview.com/2019/07/29/133999/geoengineering-experiment-harvard-creates-governance-committee-climate-change" | "Geoengineering is very controversial. How can you do experiments? Harvard has some ideas.
By James Temple. Early illustration of the SCoPEx propelled balloon.
Rendering edited by MIT Technology Review. For years, several Harvard climate scientists have been preparing to launch a balloon capable of spraying reflective particles into the atmosphere, in the hopes of learning more about our ability to counteract global warming. (See “Harvard scientists moving ahead on plans for atmospheric geoengineering experiments.”)
A prestigious university forging ahead with an outdoor experiment is a major milestone for the field, known as geoengineering.
But it’s fraught with controversy. Critics fear such a step will lend scientific legitimacy to the idea that we could turn the dial on Earth’s climate. And they fret that even doing experiments is starting down a slippery slope toward creating a tool of incredible power.
Despite the critics, Harvard will take a significant step forward on Monday, as the university announces the formation of a committee to ensure that researchers take appropriate steps to limit health and environmental risks, seek and incorporate outside input, and operate in a transparent manner.
It’s a move that could create a template for how geoengineering research is conducted going forward, and perhaps pave the way for more experiments to follow.
At least one reason Harvard had to take the unusual step of creating an advisory committee was that there isn’t a US-government-funded research program in this area, or any public oversight body set up to weigh the particularly complex questions surrounding such a proposal.
Louise Bedsworth, previously a climate advisor to former California governor Jerry Brown and executive director of the California Strategic Growth Council, will serve as chair of the committee.
“The Advisory Committee will develop and implement a framework to ensure that the SCoPEx project is conducted in a transparent, credible, and legitimate manner,” she said in a statement. “This will include establishing expectations and means to hear from multiple perspectives, voices, and stakeholders.” Committee member Katharine Mach, director of the Stanford Environment Assessment Facility, said in an interview that the committee hopes to create a replicable model that other institutions or nations can employ to review additional research in this realm. She stressed that it’s early in the process, but they intend to go beyond a scientific review of environmental and safety risks, exploring broader questions such as whether pursuing research into such a technology could ease pressure to cut climate emissions.
Mach said the committee may ultimately recommend that the proposal be altered, delayed, or canceled, and her understanding is that the research team will treat such guidance with the “utmost seriousness” and “respond in a public way.” But some think that by creating the committee, the university is rushing ahead of the public and political debate on this issue.
“It’s an extremely high-profile institution that’s decided they don’t want to wait for the regulatory regimes to greenlight this,” says Wil Burns, co-director of the Institute for Carbon Removal Law and Policy at American University.
From an engineering standpoint, the team could be ready for an initial test flight within about six months. The current plan is to launch from a site somewhere in New Mexico. The scientists, however, have said they won’t pursue the experiment until the committee completes its review and will heed a determination that they should stop.
The need for real-world observations The basic idea behind what’s known as solar geoengineering is that we could use planes, balloons, or even very long hoses to disperse certain particles into the atmosphere, where they could reflect enough sunlight back into space to moderately cool the planet.
Most of the research to date has been conducted using software climate simulations or experiments in the lab. While the models show that the technique will lower temperatures, some have found it might unleash unintended environmental impacts, such as altering monsoon patterns and food production, depending on how it's done.
Only two known experiments that could be seen as related to solar geoengineering have been carried out in the open air to date. Researchers at the University of California, San Diego, sprayed smoke and salt particles off the coast of California in 2011, and scientists in Russia dispersed aerosols from a helicopter and car in 2009.
Plans for a proposed outdoor experiment in the United Kingdom, known as the SPICE project, were dropped in 2012, amid public criticism and conflict-of-interest accusations.
The Harvard experiment, first proposed in a 2014 paper, will launch a scientific balloon equipped with propellers and sensors around 20 kilometers (12 miles) above Earth. The aircraft would release between 100 grams and 2 kilograms of sub-micrometer-size particles of calcium carbonate, a substance naturally found in shells and limestone, in a roughly kilometer-long plume.
The balloon would then fly through the plume, enabling the sensors to measure things such as how broadly the particles disperse, how they interact with other compounds in the atmosphere, and how reflective they are.
The researchers hope these observations could help assess and refine climate simulations and otherwise inform the ongoing debate over the feasibility and risks of various approaches to geoengineering.
“If anything, I’m concerned that the current climate models make solar geoengineering look too good,” Frank Keutsch, a professor of chemistry and the project’s principal investigator, said in a statement. “If we want to be able to predict how large-scale geoengineering would disrupt the ozone layer, or the exchange of air between the troposphere and stratosphere, we need more real-world observations.” The project is being funded through Harvard grants to the professors involved and the university’s Solar Geoengineering Research Program, a multidisciplinary effort to study feasibility, risks, ethics, and governance issues. The organization has raised more than $16 million from Microsoft cofounder Bill Gates, the Hewlett Foundation, the Alfred P. Sloan Foundation, and other philanthropic groups and individuals.
The researchers stress that the experiment doesn’t pose any significant health or environmental hazards and doesn’t constitute geoengineering itself, as the amount of material involved won’t be anywhere close to the level needed to measurably alter temperatures. Indeed, it would represent a fraction of the particles released in a standard commercial flight, and the materials would be so dilute once they reached the surface they wouldn’t be detectable, the scientists say.
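(The dilution claim is easy to sanity-check. In the minimal Python sketch below, only the release mass comes from the researchers’ plan; the plume geometry is an assumed initial shape, and real dispersal would spread the material much further.)

# Minimal sketch: initial concentration of the proposed calcium carbonate plume.
# Release mass is from the plan (up to 2 kg); the plume geometry is assumed.
import math

RELEASE_MASS_KG = 2.0     # upper end of the stated 100 g to 2 kg range
PLUME_LENGTH_M = 1_000.0  # roughly kilometer-long plume, per the proposal
PLUME_RADIUS_M = 50.0     # assumed initial radius of the plume cylinder

volume_m3 = math.pi * PLUME_RADIUS_M ** 2 * PLUME_LENGTH_M
mg_per_m3 = RELEASE_MASS_KG * 1e6 / volume_m3
print(f"initial plume concentration: ~{mg_per_m3:.2f} mg per cubic meter")

(Even at release that works out to roughly a quarter of a milligram per cubic meter on these assumptions, and the concentration falls rapidly as the plume mixes into the surrounding stratosphere.)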
Slippery slope But there are concerns with the way the Harvard team is moving ahead.
“It doesn’t pose a physical risk, but it does pose a considerable social and political risk in being the first step towards development of actual technology for deployment,” Raymond Pierrehumbert, a physics professor at the University of Oxford, has said of the experiment.
“There would be some limited scientific payback from such a small-scale experiment, but it is mostly a stunt to break the ice and get people used to the idea of field trials.” Another question is whether the new committee is adequately independent, given Harvard’s involvement in the first step of the selection process. The university’s dean of engineering and vice provost for research created an external search committee, made up of three individuals from outside the university, to select the chair of the advisory panel. Bedsworth, in turn, chose the rest of the members.
A number of earlier research papers have argued for the creation of government-based advisory boards to oversee geoengineering research, similar to boards that national science bodies have created to weigh ethical and safety concerns around human genome editing or recombinant DNA technologies. Government-created committees help counter the self-selection issue and ensure that the body is at least indirectly accountable to the public.
To some, the fact that government bodies haven’t yet set up such a group, or provided research funds for geoengineering, may mean there isn’t a sufficient public or political consensus on moving ahead with experiments. “Private funding subverts all that, and the question is: Is that a good or bad thing?” says Jane Flegal, an adjunct faculty member at Arizona State University’s School for the Future of Innovation in Society.
The counterargument is that the US political system is effectively broken on the topic of global warming. The inability to raise public funds for research—or pass strict legislation, for that matter—has little to do with the merits of the science, or the importance of the issue, and everything to do with the poisoned politics of climate change, says Jane Long, a former associate director at Lawrence Livermore National Laboratory, who served on the search committee.
“We’re so dysfunctional from a political perspective,” says Long, who pushed early on for the researchers to create a governance board. “I don’t know how you can draw the conclusion that we’ve gotten a democratic signal that we shouldn’t do this research.” The committee is made up of a mix of social scientists and legal and technical experts, including Michael Gerrard, a law professor at Columbia; Shuchi Talati, a fellow at the Union of Concerned Scientists; Robert Lempert, a principal researcher at RAND; and Raj Pandya, director of Thriving Earth Exchange.
But it doesn’t include any representatives of the public—say, from New Mexico, where the experiment is likely to occur—or, Burns notes, any outspoken geoengineering critics.
It’s also notable that everyone is based in the US. Flegal has previously criticized proponents of geoengineering research for failing to call on enough voices from developing nations, even as they argue that the tools could be especially important in helping to address the disproportionate impact of climate change on the global poor.
Harvard professor David Keith, one of the main figures behind the experiment, acknowledged that there are reasonable concerns about independence. But he said Harvard made a good-faith effort to create a committee several layers removed from the researchers. He adds that it’s not the only form of oversight, noting that the project will also have to pass muster with Harvard’s safety committee, Federal Aviation Administration regulations, and provisions of the National Environmental Policy Act.
Keith also questioned the assumption that public funding from a federal science body would trigger stricter oversight, noting that such proposals are generally evaluated for safety and environmental impact, not the intent of the research, which is the real issue complicating this experiment.
Risks of a backlash? Douglas MacMartin, a senior research associate in mechanical and aerospace engineering at Cornell who focuses on geoengineering, believes the experiment could provide some important scientific information about the behavior and chemistry of calcium carbonate in the stratosphere. It may also help answer some basic questions about how hard or easy it will be to disperse a plume of particles and monitor their behavior.
But he says it isn’t obvious whether a project like this is the highest priority for a field with tightly limited funding.
In a paper published in Proceedings of the National Academy of Sciences earlier this year, he and a colleague noted that scientists have still barely scratched the surface of what we can learn from computer simulations. MacMartin says it would make sense to first focus on figuring out which of the uncertainties in existing models we most need to address to better understand geoengineering, and use those questions to determine the most important and achievable small-scale outdoor experiments.
He adds that moving too quickly into the real world could create the risk of a public backlash (see “ How one climate scientist combats threats and misinformation from chemtrail conspiracists ”). MacMartin says it’s important that Harvard is taking the governance questions seriously, but that waiting for a broader federal research program could also allay some of the concerns.
Keith agrees the field needs to do a lot more modeling work, but he argues it’s crucial to test simulations with direct observations. Otherwise you can make mistakes, build upon them, and wind up widely divorced from reality.
He adds it’s possible an experiment could create a backlash, but it’s also conceivable that it could encourage people to take global warming more seriously, and that it’s impossible to know at this stage. An earlier Yale study found that people exposed to information about geoengineering became more concerned about the dangers of climate change.
Ultimately, Keith says, it’s important to move the science forward, because there’s a real chance geoengineering could substantially reduce climate risks in the coming decades. So we’ll want to understand as clearly as possible what they can do, what their limits are, and what sorts of risks they could pose.
by James Temple
" |
147 | 2,017 | "Harvard Scientists Moving Ahead on Plans for Atmospheric Geoengineering Experiments | MIT Technology Review" | "https://www.technologyreview.com/2017/03/24/153028/harvard-scientists-moving-ahead-on-plans-for-atmospheric-geoengineering-experiments" | "Harvard Scientists Moving Ahead on Plans for Atmospheric Geoengineering Experiments By James Temple. A pair of Harvard climate scientists are preparing small-scale atmospheric experiments that could offer insights into the feasibility and risks of deliberately altering the climate to ease global warming.
They would be among the earliest official geoengineering-related experiments conducted outside of a controlled laboratory or computer model, underscoring the growing sense of urgency among scientists to begin seriously studying the possibility as the threat of climate change mounts.
Sometime next year, Harvard professors David Keith and Frank Keutsch hope to launch a high-altitude balloon, tethered to a gondola equipped with propellers and sensors, from a site in Tucson, Arizona. After initial engineering tests, the balloon would spray a fine mist of materials such as sulfur dioxide, alumina, or calcium carbonate into the stratosphere. The sensors would then measure the reflectivity of the particles, the degree to which they disperse or coalesce, and the way they interact with other compounds in the atmosphere.
The researchers first proposed these balloon experiments in a 2014 paper.
But at a geoengineering conference in Washington, D.C., on Friday, Keith said they have begun engineering design work with Arizona test balloon company World View Enterprises. They’ve also started discussions about the appropriate governance structure for such an experiment, and they plan to set up an independent body to review their proposals.
“We would like to have the first flights next year,” he said at the Forum on U.S. Solar Geoengineering Research, held at the Carnegie Endowment for International Peace.
In an earlier interview with MIT Technology Review, Keith stressed that the experiments would not be a binary test of geoengineering itself. But they should provide useful information about the proposed method that he has closely studied, known as solar radiation management.
The basic idea is that spraying certain types of particles into the stratosphere could help reflect more heat back into space. Scientists believe it could work because nature already does it. Large volcanic eruptions in the past have blasted tens of millions of tons of sulfur dioxide into the sky, which contributed to lower global temperatures in subsequent months.
What’s less clear is how precisely the technique could control worldwide temperatures, what materials would work best, and what the environmental side effects might be. Notably, previous volcanic eruptions have also decreased precipitation levels in parts of the world, and sulfur dioxide is known to deplete the protective ozone layer.
Keith has previously used computer modeling to explore the possibility of using other materials that may have a neutral impact on ozone, including diamond dust and alumina. Late last year, he, Keutsch, and others published a paper that found using calcite, a mineral made up of calcium carbonate, “may cool the planet while simultaneously repairing the ozone layer.” The balloon tests could provide additional insight into how these chemicals actually interact with precursors to ozone in the real world, and could help refine the researchers’ understanding of solar geoengineering, he says: “You have to go measure things in the real world because nature surprises you.” Keith stresses that it’s too early to say whether any geoengineering technologies should ever be deployed. But he has argued for years that research should move ahead to better understand their capabilities and dangers, because it’s possible they could significantly reduce the risks of climate change. He stressed that the experiments would have negligible environmental impacts, as they will involve no more than a kilogram of materials.
Funding for the initial experiments would come from grants that Harvard provided Keith and Keutsch as new professors. Additional funds may come from Harvard’s Solar Geoengineering Research Program, a multidisciplinary effort launching this spring to study feasibility, risks, ethics, and governance issues surrounding geoengineering. As of press time, it had raised more than $7 million from Microsoft cofounder Bill Gates, the Hewlett Foundation, the Alfred P. Sloan Foundation, Harvard-internal funds, and other philanthropists.
Geoengineering critics argue that the climate system is too complex to meddle with, that the environmental risks are too high, or that even talking about technological “fixes” could ease pressure to cut greenhouse gas emissions.
Only two known experiments have been carried out in the open air to date that could be considered geoengineering-related: University of California, San Diego, researchers sprayed smoke and salt particles off the coast of California as part of the E-PEACE experiment in 2011, and scientists in Russia dispersed aerosols from a helicopter and car in 2009. The so-called SPICE experiment in the United Kingdom was quickly scuttled in 2012, following public criticism and conflict-of-interest accusations after several of the scientists applied for a related patent.
In an earlier interview, Jane Long, a former associate director at Lawrence Livermore National Laboratory, stressed that researchers moving forward with geoengineering experiments need to go to great lengths to ensure proper public notification, opportunities for input, and appropriate oversight, particularly if they’re relying on private funds. But she said it’s time to begin seriously studying the technology’s potential given the growing dangers of climate change.
“We should have started a decade ago,” she said. “It’s critical to know as much as we can as soon as we can.” by James Temple
" |
148 | 2,019 | "I asked my students to turn in their cell phones and write about living without them | MIT Technology Review" | "https://www.technologyreview.com/2019/12/26/131179/teenagers-without-cell-phones" | "I asked my students to turn in their cell phones and write about living without them By Ron Srigley. Conceptual illustration of a man’s face being obscured by his phone. Selman Design. A few years ago, I performed an experiment in a philosophy class I was teaching. My students had failed a midterm test rather badly. I had a hunch that their pervasive use of cell phones and laptops in class was partly responsible. So I asked them what they thought had gone wrong. After a few moments of silence, a young woman put up her hand and said: “We don’t understand what the books say, sir. We don’t understand the words.” I looked around the class and saw guileless heads pensively nodding in agreement.
I extemporized a solution: I offered them extra credit if they would give me their phones for nine days and write about living without them. Twelve students—about a third of the class—took me up on the offer. What they wrote was remarkable, and remarkably consistent. These university students, given the chance to say what they felt, didn’t gracefully submit to the tech industry and its devices.
The usual industry and education narrative about cell phones, social media, and digital technology generally is that they build community, foster communication, and increase efficiency, thus improving our lives. Mark Zuckerberg’s recent reformulation of Facebook’s mission statement is typical: the company aims to “give people the power to build community and bring the world closer together.” Without their phones, most of my students initially felt lost, disoriented, frustrated, and even frightened. That seemed to support the industry narrative: look how disconnected and lonely you’ll be without our technology. But after just two weeks, the majority began to think that their cell phones were in fact limiting their relationships with other people, compromising their own lives, and somehow cutting them off from the “real” world. Here is some of what they said.
“You must be weird or something” “Believe it or not, I had to walk up to a stranger and ask what time it was. It honestly took me a lot of guts and confidence to ask someone,” Janet wrote. (Her name, like the others here, is a pseudonym.) She describes the attitude she was up against: “Why do you need to ask me the time? Everyone has a cell phone. You must be weird or something.” Emily went even further. Simply walking by strangers “in the hallway or when I passed them on the street” caused almost all of them to take out a phone “right before I could gain eye contact with them.” To these young people, direct, unmediated human contact was experienced as ill-mannered at best and strange at worst. James: “One of the worst and most common things people do nowadays is pull out their cell phone and use it while in a face-to-face conversation. This action is very rude and unacceptable, but yet again, I find myself guilty of this sometimes because it is the norm.” Emily noticed that “a lot of people used their cell phones when they felt they were in an awkward situation, for an example [sic] being at a party while no one was speaking to them.”
The price of this protection from awkward moments is the loss of human relationships, a consequence that almost all the students identified and lamented. Without his phone, James said, he found himself forced to look others in the eye and engage in conversation. Stewart put a moral spin on it. “Being forced to have [real relations with people] obviously made me a better person because each time it happened I learned how to deal with the situation better, other than sticking my face in a phone.” Ten of the 12 students said their phones were compromising their ability to have such relationships.
Virtually all the students admitted that ease of communication was one of the genuine benefits of their phones. However, eight out of 12 said they were genuinely relieved not to have to answer the usual flood of texts and social-media posts. Peter: “I have to admit, it was pretty nice without the phone all week. Didn’t have to hear the fucking thing ring or vibrate once, and didn’t feel bad not answering phone calls because there were none to ignore.” Indeed, the language they used indicated that they experienced this activity almost as a type of harassment. “It felt so free without one and it was nice knowing no one could bother me when I didn’t want to be bothered,” wrote William. Emily said that she found herself “sleeping more peacefully after the first two nights of attempting to sleep right away when the lights got shut off.” Several students went further and claimed that communication with others was in fact easier and more efficient without their phones. Stewart: “Actually I got things done much quicker without the cell because instead of waiting for a response from someone (that you don’t even know if they read your message or not) you just called them [from a land line], either got an answer or didn’t, and moved on to the next thing.” Technologists assert that their instruments make us more productive. But for the students, phones had the opposite effect. “Writing a paper and not having a phone boosted productivity at least twice as much,” Elliott claimed. “You are concentrated on one task and not worrying about anything else. Studying for a test was much easier as well because I was not distracted by the phone at all.” Stewart found he could “sit down and actually focus on writing a paper.” He added, “Because I was able to give it 100% of my attention, not only was the final product better than it would have been, I was also able to complete it much quicker.” Even Janet, who missed her phone more than most, admitted, “One positive thing that came out of not having a cell phone was that I found myself more productive and I was more apt to pay attention in class.” Some students felt not only distracted by their phones, but morally compromised. Kate: “Having a cell phone has actually affected my personal code of morals and this scares me … I regret to admit that I have texted in class this year, something I swore to myself in high school that I would never do … I am disappointed in myself now that I see how much I have come to depend on technology … I start to wonder if it has affected who I am as a person, and then I remember that it already has.” And James, though he says we must continue to develop our technology, said that “what many people forget is that it is vital for us not to lose our fundamental values along the way.” Other students were worried that their cell-phone addiction was depriving them of a relationship to the world. Listen to James: “It is almost like the earth stood still and I actually looked around and cared about current events ... 
This experiment has made many things clear to me and one thing is for sure, I am going to cut back the time I am on my cell phone substantially.” Stewart said he began to see how things “really work” once he was without his phone: “One big thing I picked up on while doing this assignment is how much more engaged I was in the world around me … I noticed that the majority of people were disengaged … There is all this potential for conversation, interaction, and learning from one another but we’re too distracted by the screens … to partake in the real events around us.” In parentis, loco Some parents were pleased with their children’s phone-less selves. James said his mother “thought it was great that I did not have my phone because I paid more attention to her while she was talking.” One parent even proposed to join in the experiment.
But for some of the students, phones were a lifeline to their parents. As Karen Fingerman of the University of Texas at Austin wrote in a 2017 article in the journal Innovation in Aging, in the mid to late 20th century, “only half of [American] parents reported contact with a grown child at least once a week.” By contrast, she writes, recent studies find that “nearly all” parents of young adults were in weekly contact with their children, and over half were in daily contact by phone, by text message, or in person.
Emily wrote that without her cell phone, “I felt like I was craving some interaction from a family member. Either to keep my ass in line with the upcoming exams, or to simply let me know someone is supporting me.” Janet admitted, “The most difficult thing was defiantly [sic] not being able to talk to my mom or being able to communicate with anyone on demand or at that present moment. It was extremely stressful for my mom.” Safety was also a recurrent theme. Janet said, “Having a cell phone makes me feel secure in a way. So having that taken away from me changed my life a little. I was scared that something serious might happen during the week of not having a cell phone.” And she wondered what would have happened “if someone were to attack me or kidnap me or some sort of action along those lines or maybe even if I witnessed a crime take place, or I needed to call an ambulance.” What’s revealing is that this student and others perceived the world to be a very dangerous place. Cell phones were seen as necessary to combat that danger. The city in which these students lived has one of the lowest crime rates in the world and almost no violent crime of any kind, yet they experienced a pervasive, undefined fear.
Live in fragments no longer

My students’ experience of cell phones and the social-media platforms they support may not be exhaustive, or statistically representative. But it is clear that these gadgets made them feel less alive, less connected to other people and to the world, and less productive. They also made many tasks more difficult and encouraged students to act in ways they considered unworthy of themselves. In other words, phones didn’t help them. They harmed them.
I first carried out this exercise in 2014. I repeated it last year in the bigger, more urban institution where I now teach. The occasion this time wasn’t a failed test; it was my despair over the classroom experience in its entirety. I want to be clear here—this is not personal. I have a real fondness for my students as people. But they’re abysmal students; or rather, they aren’t really students at all, at least not in my class. On any given day, 70% of them are sitting before me shopping, texting, completing assignments, watching videos, or otherwise occupying themselves. Even the “good” students do this. No one’s even trying to conceal the activity, the way students did before. This is just what they do.
What’s changed? Most of what they wrote in the assignment echoed the papers I’d received in 2014. The phones were compromising their relationships, cutting them off from real things, and distracting them from more important matters. But there were two notable differences. First, for these students, even the simplest activities—getting on the bus or train, ordering dinner, getting up in the morning, even knowing where they were—required their cell phones. As the phone grew more ubiquitous in their lives, their fear of being without it seemed to grow apace. They were jittery and lost without them.
This may help to explain the second difference: compared with the first batch, this second group displayed a fatalism about phones. Tina’s concluding remarks described it well: “Without cell phones life would be simple and real but we may not be able to cope with the world and our society. After a few days I felt alright without the phone as I got used to it. But I guess it is only fine if it is for a short period of time. One cannot hope to compete efficiently in life without a convenient source of communication that is our phones.” Compare this admission with the reaction of Peter, who a few months after the course in 2014 tossed his smartphone into a river.
I think my students are being entirely rational when they “distract” themselves in my class with their phones. They understand the world they are being prepared to enter much better than I do. In that world, I’m the distraction, not their phones or their social-media profiles or their networking. Yet for what I’m supposed to be doing—educating and cultivating young hearts and minds—the consequences are pretty dark.
Paula was about 28, a little older than most students in the class. She’d returned to college with a real desire to learn after working for almost a decade following high school. I’ll never forget the morning she gave a presentation to a class that was even more alternatively engaged than usual. After it was all over, she looked at me in despair and said, simply: “How in the world do you do this?” Ron Srigley is a writer who teaches at Humber College and Laurentian University.
This story was part of our January/February 2020 issue.
" |
149 | 2,019 | "A philosopher argues that an AI can’t be an artist | MIT Technology Review" | "https://www.technologyreview.com/2019/02/21/239489/a-philosopher-argues-that-an-ai-can-never-be-an-artist" | "A philosopher argues that an AI can’t be an artist Creativity is, and always will be, a human endeavor.
By Sean Dorrance Kelly

On March 31, 1913, in the Great Hall of the Musikverein concert house in Vienna, a riot broke out in the middle of a performance of an orchestral song by Alban Berg. Chaos descended. Furniture was broken. Police arrested the concert’s organizer for punching Oscar Straus, a little-remembered composer of operettas. Later, at the trial, Straus quipped about the audience’s frustration. The punch, he insisted, was the most harmonious sound of the entire evening. History has rendered a different verdict: the concert’s conductor, Arnold Schoenberg, has gone down as perhaps the most creative and influential composer of the 20th century.
You may not enjoy Schoenberg’s dissonant music, which rejects conventional tonality to arrange the 12 notes of the scale according to rules that don’t let any predominate. But he changed what humans understand music to be. This is what makes him a genuinely creative and innovative artist. Schoenberg’s techniques are now integrated seamlessly into everything from film scores and Broadway musicals to the jazz solos of Miles Davis and Ornette Coleman.
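To make the rule concrete, here is a minimal sketch in Python of the twelve-tone idea: a “row” orders all 12 pitch classes with no repeats, so no single note can predominate. The row below is generated at random for illustration; it is not a reconstruction of any particular Schoenberg row.

```python
# A minimal sketch of a twelve-tone row: each of the 12 pitch classes
# (0 = C, 1 = C#, ..., 11 = B) appears exactly once, so no note predominates.
import random

PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def random_tone_row():
    """Return a random ordering of the 12 pitch classes."""
    row = list(range(12))
    random.shuffle(row)
    return row

def inversion(row):
    """Mirror each interval around the row's first note, a classic serial transformation."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

row = random_tone_row()
print("prime row: ", [PITCH_NAMES[p] for p in row])
print("retrograde:", [PITCH_NAMES[p] for p in reversed(row)])
print("inversion: ", [PITCH_NAMES[p] for p in inversion(row)])
```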
Creativity is among the most mysterious and impressive achievements of human existence. But what is it? Creativity is not just novelty. A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative. Also, creativity is bounded by history: what counts as creative inspiration in one period or place might be disregarded as ridiculous, stupid, or crazy in another. A community has to accept ideas as good for them to count as creative.
As in Schoenberg’s case, or that of any number of other modern artists, that acceptance need not be universal. It might, indeed, not come for years—sometimes creativity is mistakenly dismissed for generations. But unless an innovation is eventually accepted by some community of practice, it makes little sense to speak of it as creative.
Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to “superintelligent” successors, which he defines as having “intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the “singularity” and Bostrom an “intelligence explosion”—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs “speed superintelligence.”

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines? No.
Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.
This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves.
Also, I am primarily talking about machine advances of the sort seen recently with the current deep-learning paradigm, as well as its computational successors. Other paradigms have governed AI research in the past. These have already failed to realize their promise. Still other paradigms may come in the future, but if we speculate that some notional future AI whose features we cannot meaningfully describe will accomplish wondrous things, that is mythmaking, not reasoned argument about the possibilities of technology.
Creative achievement operates differently in different domains. I cannot offer a complete taxonomy of the different kinds of creativity here, so to make the point I will sketch an argument involving three quite different examples: music, games, and mathematics.
Music to my ears

Can we imagine a machine of such superhuman creative ability that it brings about changes in what we understand music to be, as Schoenberg did? That’s what I claim a machine cannot do. Let’s see why.
Computer music composition systems have existed for quite some time. In 1965, at the age of 17, Kurzweil himself, using a precursor of the pattern recognition systems that characterize deep-learning algorithms today, programmed a computer to compose recognizable music. Variants of this technique are used today. Deep-learning algorithms have been able to take as input a bunch of Bach chorales, for instance, and compose music so characteristic of Bach’s style that it fools even experts into thinking it is original. This is mimicry. It is what an artist does as an apprentice: copy and perfect the style of others instead of working in an authentic, original voice. It is not the kind of musical creativity that we associate with Bach, never mind with Schoenberg’s radical innovation.
So what do we say? Could there be a machine that, like Schoenberg, invents a whole new way of making music? Of course we can imagine, and even make, such a machine. Given an algorithm that modifies its own compositional rules, we could easily produce a machine that makes music as different from what we now consider good music as Schoenberg did then.
But this is where it gets complicated.
We count Schoenberg as a creative innovator not just because he managed to create a new way of composing music but because people could see in it a vision of what the world should be. Schoenberg’s vision involved the spare, clean, efficient minimalism of modernity. His innovation was not just to find a new algorithm for composing music; it was to find a way of thinking about what music is that allows it to speak to what is needed now.
Some might argue that I have raised the bar too high. Am I arguing, they will ask, that a machine needs some mystic, unmeasurable sense of what is socially necessary in order to count as creative? I am not—for two reasons.
First, remember that in proposing a new, mathematical technique for musical composition, Schoenberg changed our understanding of what music is. It is only creativity of this tradition-defying sort that requires some kind of social sensitivity. Had listeners not experienced his technique as capturing the anti-traditionalism at the heart of the radical modernity emerging in early-20th-century Vienna, they might not have heard it as something of aesthetic worth. The point here is that radical creativity is not an “accelerated” version of quotidian creativity. Schoenberg’s achievement is not a faster or better version of the type of creativity demonstrated by Oscar Straus or some other average composer: it’s fundamentally different in kind.
Second, my argument is not that the creator’s responsiveness to social necessity must be conscious for the work to meet the standards of genius. I am arguing instead that we must be able to interpret the work as responding that way.
It would be a mistake to interpret a machine’s composition as part of such a vision of the world. The argument for this is simple.
Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.
For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.
A molecule-for-molecule duplicate of a human being would be human in the relevant way. But we already have a way of producing such a being: it takes about nine months. At the moment, a machine can only do something much less interesting than what a person can do. It can create music in the style of Bach, for instance—perhaps even music that some experts think is better than Bach’s own. But that is only because its music can be judged against a preexisting standard. What a machine cannot do is bring about changes in our standards for judging the quality of music or of understanding what music is or is not.
This is not to deny that creative artists use whatever tools they have at their disposal, and that those tools shape the sort of art they make. The trumpet helped Davis and Coleman realize their creativity. But the trumpet is not, itself, creative. Artificial-intelligence algorithms are more like musical instruments than they are like people. Taryn Southern, a former American Idol contestant, recently released an album where the percussion, melodies, and chords were algorithmically generated, though she wrote the lyrics and repeatedly tweaked the instrumentation algorithm until it delivered the results she wanted. In the early 1990s, David Bowie did it the other way around: he wrote the music and used a Mac app called the Verbasizer to pseudorandomly recombine sentences into lyrics. Just like previous tools of the music industry—from recording devices to synthesizers to samplers and loopers—new AI tools work by stimulating and channeling the creative abilities of the human artist (and reflect the limitations of those abilities).
Games without frontiers

Much has been written about the achievements of deep-learning systems that are now the best Go players in the world. AlphaGo and its variants have strong claims to having created a whole new way of playing the game. They have taught human experts that opening moves long thought to be ill-conceived can lead to victory. The program plays in a style that experts describe as strange and alien. “They’re how I imagine games from far in the future,” Shi Yue, a top Go player, said of AlphaGo’s play. The algorithm seems to be genuinely creative.
In some important sense it is. Game-playing, though, is different from composing music or writing a novel: in games there is an objective measure of success. We know we have something to learn from AlphaGo because we see it win.
But that is also what makes Go a “toy domain,” a simplified case that says only limited things about the world.
The most fundamental sort of human creativity changes our understanding of ourselves because it changes our understanding of what we count as good. For the game of Go, by contrast, the nature of goodness is simply not up for grabs: a Go strategy is good if and only if it wins. Human life does not generally have this feature: there is no objective measure of success in the highest realms of achievement. Certainly not in art, literature, music, philosophy, or politics. Nor, for that matter, in the development of new technologies.
In various toy domains, machines may be able to teach us about a certain very constrained form of creativity. But the domain’s rules are pre-formed; the system can succeed only because it learns to play well within these constraints. Human culture and human existence are much more interesting than this. There are norms for how human beings act, of course. But creativity in the genuine sense is the ability to change those norms in some important human domain. Success in toy domains is no indication that creativity of this more fundamental sort is achievable.
It’s a knockout

A skeptic might contend that the argument works only because I’m contrasting games with artistic genius. There are other paradigms of creativity in the scientific and mathematical realm. In these realms, the question isn’t about a vision of the world. It is about the way things actually are.
Might a machine come up with mathematical proofs so far beyond us that we simply have to defer to its creative genius? No.
Computers have already assisted with notable mathematical achievements. But their contributions haven’t been particularly creative. Take the first major theorem proved using a computer: the four-color theorem, which states that any flat map can be colored with at most four colors in such a way that no two adjacent “countries” end up with the same one (it also applies to countries on the surface of a globe).
Nearly a half-century ago, in 1976, Kenneth Appel and Wolfgang Haken at the University of Illinois published a computer-assisted proof of this theorem. The computer performed billions of calculations, checking thousands of different types of maps—so many that it was (and remains) logistically unfeasible for humans to verify that each possibility accorded with the computer’s view. Since then, computers have assisted in a wide range of new proofs.
But the supercomputer is not doing anything creative by checking a huge number of cases. Instead, it is doing something boring a huge number of times. This seems like almost the opposite of creativity. Furthermore, it is so far from the kind of understanding we normally think a mathematical proof should offer that some experts don’t consider these computer-assisted strategies mathematical proofs at all. As Thomas Tymoczko, a philosopher of mathematics, has argued, if we can’t even verify whether the proof is correct, then all we are really doing is trusting in a potentially error-prone computational process.
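To see how mechanical that case-checking is, here is a minimal sketch in Python. The map and the proposed coloring are invented for the example, and this is of course nothing like the scale of the Appel-Haken computation; the point is only that verifying a coloring is rote work, one adjacency at a time.

```python
# Verifying a proposed map coloring is pure case-checking: test every pair
# of adjacent regions in turn. No creative insight is required.
def is_valid_coloring(adjacencies, coloring, max_colors=4):
    """True if at most `max_colors` colors are used and no two neighbors match."""
    if len(set(coloring.values())) > max_colors:
        return False
    return all(coloring[a] != coloring[b] for a, b in adjacencies)

# A made-up "map": regions A through E and the pairs that share a border.
adjacencies = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
coloring = {"A": 1, "B": 2, "C": 3, "D": 1, "E": 2}
print(is_valid_coloring(adjacencies, coloring))  # True
```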
Even supposing we do trust the results, however, computer-assisted proofs are something like the analogue of computer-assisted composition. If they give us a worthwhile product, it is mostly because of the contribution of the human being. But some experts have argued that artificial intelligence will be able to achieve more than this. Let us suppose, then, that we have the ultimate: a self-reliant machine that proves new theorems all on its own.
Could a machine like this massively surpass us in mathematical creativity, as Kurzweil and Bostrom argue? Suppose, for instance, that an AI comes up with a resolution to some extremely important and difficult open problem in mathematics.
There are two possibilities. The first is that the proof is extremely clever, and when experts in the field go over it they discover that it is correct. In this case, the AI that discovered the proof would be applauded. The machine itself might even be considered to be a creative mathematician. But such a machine would not be evidence of the singularity; it would not so outstrip us in creativity that we couldn’t even understand what it was doing. Even if it had this kind of human-level creativity, it wouldn’t lead inevitably to the realm of the superhuman.
Some mathematicians are like musical virtuosos: they are distinguished by their fluency in an existing idiom. But geniuses like Srinivasa Ramanujan, Emmy Noether, and Alexander Grothendieck arguably reshaped mathematics just as Schoenberg reshaped music. Their achievements were not simply proofs of long-standing hypotheses but new and unexpected forms of reasoning, which took hold not only on the strength of their logic but also on their ability to convince other mathematicians of the significance of their innovations. A notional AI that comes up with a clever proof to a problem that has long befuddled human mathematicians is akin to AlphaGo and its variants: impressive, but nothing like Schoenberg.
That brings us to the other option. Suppose the best and brightest deep-learning algorithm is set loose and after some time says, “I’ve found a proof of a fundamentally new theorem, but it’s too complicated for even your best mathematicians to understand.” This isn’t actually possible. A proof that not even the best mathematicians can understand doesn’t really count as a proof. Proving something implies that you are proving it to someone.
Just as a musician has to persuade her audience to accept her aesthetic concept of what is good music, a mathematician has to persuade other mathematicians that there are good reasons to believe her vision of the truth. To count as a valid proof in mathematics, a claim must be understandable and endorsable by some independent set of experts who are in a good position to understand it. If the experts who should be able to understand the proof can’t, then the community refuses to endorse it as a proof.
For this reason, mathematics is more like music than one might have thought. A machine could not surpass us massively in creativity because either its achievement would be understandable, in which case it would not massively surpass us, or it would not be understandable, in which case we could not count it as making any creative advance at all.
The eye of the beholder

Engineering and applied science are, in a way, somewhere between these examples. There is something like an objective, external measure of success. You can’t “win” at bridge building or medicine the way you can at chess, but one can see whether the bridge falls down or the virus is eliminated. These objective criteria come into play only once the domain is fairly well specified: coming up with strong, lightweight materials, say, or drugs that combat particular diseases. An AI might help in drug discovery by, in effect, doing the same thing as the AI that composed what sounded like a well-executed Bach cantata or came up with a brilliant Go strategy. Like a microscope, telescope, or calculator, such an AI is properly understood as a tool that enables human discovery—not as an autonomous creative agent.
It’s worth thinking about the theory of special relativity here. Albert Einstein is remembered as the “discoverer” of relativity—but not because he was the first to come up with equations that better describe the structure of space and time. George Fitzgerald, Hendrik Lorentz, and Henri Poincaré, among others, had written down those equations before Einstein. He is acclaimed as the theory’s discoverer because he had an original, remarkable, and true understanding of what the equations meant and could convey that understanding to others.
For a machine to do physics that is in any sense comparable to Einstein’s in creativity, it must be able to persuade other physicists of the worth of its ideas at least as well as he did. Which is to say, we would have to be able to accept its proposals as aiming to communicate their own validity to us.
Should such a machine ever come into being, as in the parable of Pinocchio, we would have to treat it as we would a human being. That means, among other things, we would have to attribute to it not only intelligence but whatever dignity and moral worth is appropriate to human beings as well. We are a long way off from this scenario, it seems to me, and there is no reason to think the current computationalist paradigm of artificial intelligence—in its deep-learning form or any other—will ever move us closer to it.
Creativity is one of the defining features of human beings. The capacity for genuine creativity, the kind of creativity that updates our understanding of the nature of being, that changes the way we understand what it is to be beautiful or good or true—that capacity is at the ground of what it is to be human. But this kind of creativity depends upon our valuing it, and caring for it, as such. As the writer Brian Christian has pointed out, human beings are starting to act less like beings who value creativity as one of our highest possibilities, and more like machines themselves.
How many people today have jobs that require them to follow a predetermined script for their conversations? How little of what we know as real, authentic, creative, and open-ended human conversation is left in this eviscerated charade? How much is it like, instead, the kind of rule-following that a machine can do? And how many of us, insofar as we allow ourselves to be drawn into these kinds of scripted performances, are eviscerated as well? How much of our day do we allow to be filled with effectively machine-like activities—filling out computerized forms and questionnaires, responding to click-bait that works on our basest, most animal-like impulses, playing games that are designed to optimize our addictive response?

We are in danger of this confusion in some of the deepest domains of human achievement as well. If we allow ourselves to say that machine proofs we cannot understand are genuine “proofs,” for example, ceding social authority to machines, we will be treating the achievements of mathematics as if they required no human understanding at all. We will be taking one of our highest forms of creativity and intelligence and reducing it to a single bit of information: yes or no.
Even if we had that information, it would be of little value to us without some understanding of the reasons underlying it. We must not lose sight of the essential character of reasoning, which is at the foundation of what mathematics is.
So too with art and music and philosophy and literature. If we allow ourselves to slip in this way, to treat machine “creativity” as a substitute for our own, then machines will indeed come to seem incomprehensibly superior to us. But that is because we will have lost track of the fundamental role that creativity plays in being human.
Sean Dorrance Kelly is a philosophy professor at Harvard and coauthor of the New York Times best-selling book All Things Shining.
This story was part of our March/April 2019 issue.
" |
150 | 2,020 | "The coming war on the hidden algorithms that trap people in poverty | MIT Technology Review" | "https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back" | "The coming war on the hidden algorithms that trap people in poverty A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.
By Karen Hao

Miriam was only 21 when she met Nick. She was a photographer, fresh out of college, waiting tables. He was 16 years her senior and a local business owner who had worked in finance. He was charming and charismatic; he took her out on fancy dates and paid for everything. She quickly fell into his orbit.
It began with one credit card. At the time, it was the only one she had. Nick would max it out with $5,000 worth of business purchases and promptly pay it off the next day. Miriam, who asked me not to use their real names for fear of interfering with their ongoing divorce proceedings, discovered that this was boosting her credit score. Having grown up with a single dad in a low-income household, she trusted Nick’s know-how over her own. He readily encouraged the dynamic, telling her she didn’t understand finance. She opened up more credit cards for him under her name.
The trouble started three years in. Nick asked her to quit her job to help out with his business. She did. He told her to go to grad school and not worry about compounding her existing student debt. She did. He promised to take care of everything, and she believed him. Soon after, he stopped settling her credit card balances. Her score began to crater.
Still, Miriam stayed with him. They got married. They had three kids. Then one day, the FBI came to their house and arrested him. In federal court, the judge convicted him on nearly $250,000 of wire fraud. Miriam discovered the full extent of the tens of thousands of dollars in debt he’d racked up in her name. “The day that he went to prison, I had $250 cash, a house in foreclosure, a car up for repossession, three kids,” she says. “I went within a month from having a nanny and living in a nice house and everything to just really abject poverty.” Miriam is a survivor of what’s known as “coerced debt,” a form of abuse usually perpetrated by an intimate partner or family member. While economic abuse is a long-standing problem, digital banking has made it easier to open accounts and take out loans in a victim’s name, says Carla Sanchez-Adams, an attorney at Texas RioGrande Legal Aid. In the era of automated credit-scoring algorithms, the repercussions can also be far more devastating.
Credit scores have been used for decades to assess consumer creditworthiness, but their scope is far greater now that they are powered by algorithms: not only do they consider vastly more data, in both volume and type, but they increasingly affect whether you can buy a car, rent an apartment, or get a full-time job. Their comprehensive influence means that if your score is ruined, it can be nearly impossible to recover. Worse, the algorithms are owned by private companies that don’t divulge how they come to their decisions. Victims can be sent in a downward spiral that sometimes ends in homelessness or a return to their abuser.
Credit-scoring algorithms are not the only ones that affect people’s economic well-being and access to basic services. Algorithms now decide which children enter foster care, which patients receive medical care, which families get access to stable housing. Those of us with means can pass our lives unaware of any of this. But for low-income individuals, the rapid growth and adoption of automated decision-making systems has created a hidden web of interlocking traps.
Fortunately, a growing group of civil lawyers is beginning to organize around this issue. Borrowing a playbook from the criminal defense world’s pushback against risk-assessment algorithms, they’re seeking to educate themselves on these systems, build a community, and develop litigation strategies. “Basically every civil lawyer is starting to deal with this stuff, because all of our clients are in some way or another being touched by these systems,” says Michele Gilman, a clinical law professor at the University of Baltimore. “We need to wake up, get training. If we want to be really good holistic lawyers, we need to be aware of that.”

“Am I going to cross-examine an algorithm?”

Gilman has been practicing law in Baltimore for 20 years. In her work as a civil lawyer and a poverty lawyer, her cases have always come down to the same things: representing people who’ve lost access to basic needs, like housing, food, education, work, or health care. Sometimes that means facing off with a government agency. Other times it’s with a credit reporting agency, or a landlord. Increasingly, the fight over a client’s eligibility now involves some kind of algorithm.
“This is happening across the board to our clients,” she says. “They’re enmeshed in so many different algorithms that are barring them from basic services. And the clients may not be aware of that, because a lot of these systems are invisible.”

She doesn’t remember exactly when she realized that some eligibility decisions were being made by algorithms. But when that transition first started happening, it was rarely obvious. Once, she was representing an elderly, disabled client who had inexplicably been cut off from her Medicaid-funded home health-care assistance. “We couldn’t find out why,” Gilman remembers. “She was getting sicker, and normally if you get sicker, you get more hours, not less.”

Not until they were standing in the courtroom in the middle of a hearing did the witness representing the state reveal that the government had just adopted a new algorithm. The witness, a nurse, couldn’t explain anything about it. “Of course not—they bought it off the shelf,” Gilman says. “She’s a nurse, not a computer scientist. She couldn’t answer what factors go into it. How is it weighted? What are the outcomes that you’re looking for? So there I am with my student attorney, who’s in my clinic with me, and it’s like, ‘Oh, am I going to cross-examine an algorithm?’”

For Kevin De Liban, an attorney at Legal Aid of Arkansas, the change was equally insidious. In 2014, his state also instituted a new system for distributing Medicaid-funded in-home assistance, cutting off a whole host of people who had previously been eligible. At the time, he and his colleagues couldn’t identify the root problem. They only knew that something was different. “We could recognize that there was a change in assessment systems from a 20-question paper questionnaire to a 283-question electronic questionnaire,” he says.
It was two years later, when an error in the algorithm once again brought it under legal scrutiny, that De Liban finally got to the bottom of the issue. He realized that nurses were telling patients, “Well, the computer did it—it’s not me.” “That’s what tipped us off,” he says. “If I had known what I knew in 2016, I would have probably done a better job advocating in 2014,” he adds.
“One person walks through so many systems on a day-to-day basis”

Gilman has since grown a lot more savvy. From her vantage point representing clients with a range of issues, she’s observed the rise and collision of two algorithmic webs. The first consists of credit-reporting algorithms, like the ones that snared Miriam, which affect access to private goods and services like cars, homes, and employment. The second encompasses algorithms adopted by government agencies, which affect access to public benefits like health care, unemployment, and child support services.
On the credit-reporting side, the growth of algorithms has been driven by the proliferation of data, which is easier than ever to collect and share. Credit reports aren’t new, but these days their footprint is far more expansive. Consumer reporting agencies, including credit bureaus, tenant screening companies, and check verification services, amass this information from a wide range of sources: public records, social media, web browsing, banking activity, app usage, and more. The algorithms then assign people “worthiness” scores, which figure heavily into background checks performed by lenders, employers, landlords, even schools.
Government agencies, on the other hand, are driven to adopt algorithms when they want to modernize their systems. The push to adopt web-based apps and digital tools began in the early 2000s and has continued with a move toward more data-driven automated systems and AI. There are good reasons to seek these changes. During the pandemic, many unemployment benefit systems struggled to handle the massive volume of new requests, leading to significant delays. Modernizing these legacy systems promises faster and more reliable results.
But the software procurement process is rarely transparent, and thus lacks accountability. Public agencies often buy automated decision-making tools directly from private vendors. The result is that when systems go awry, the individuals affected—and their lawyers—are left in the dark. “They don’t advertise it anywhere,” says Julia Simon-Mishel, an attorney at Philadelphia Legal Assistance. “It’s often not written in any sort of policy guides or policy manuals. We’re at a disadvantage.” The lack of public vetting also makes the systems more prone to error. One of the most egregious malfunctions happened in Michigan in 2013. After a big effort to automate the state’s unemployment benefits system, the algorithm incorrectly flagged over 34,000 people for fraud.
“It caused a massive loss of benefits,” Simon-Mishel says. “There were bankruptcies; there were unfortunately suicides. It was a whole mess.” Low-income individuals bear the brunt of the shift toward algorithms. They are the people most vulnerable to temporary economic hardships that get codified into consumer reports, and the ones who need and seek public benefits. Over the years, Gilman has seen more and more cases where clients risk entering a vicious cycle. “One person walks through so many systems on a day-to-day basis,” she says. “I mean, we all do. But the consequences of it are much more harsh for poor people and minorities.” She brings up a current case in her clinic as an example. A family member lost work because of the pandemic and was denied unemployment benefits because of an automated system failure. The family then fell behind on rent payments, which led their landlord to sue them for eviction. While the eviction won’t be legal because of the CDC’s moratorium , the lawsuit will still be logged in public records. Those records could then feed into tenant-screening algorithms, which could make it harder for the family to find stable housing in the future. Their failure to pay rent and utilities could also be a ding on their credit score, which once again has repercussions. “If they are trying to set up cell-phone service or take out a loan or buy a car or apply for a job, it just has these cascading ripple effects,” Gilman says.
“Every case is going to turn into an algorithm case”

In September, Gilman, who is currently a faculty fellow at the Data and Society research institute, released a report documenting all the various algorithms that poverty lawyers might encounter. Called Poverty Lawgorithms, it’s meant to be a guide for her colleagues in the field. Divided into specific practice areas like consumer law, family law, housing, and public benefits, it explains how to deal with issues raised by algorithms and other data-driven technologies within the scope of existing laws.
If a client is denied an apartment because of a poor credit score, for example, the report recommends that a lawyer first check whether the data being fed into the scoring system is accurate. Under the Fair Credit Reporting Act, reporting agencies are required to ensure the validity of their information, but this doesn’t always happen. Disputing any faulty claims could help restore the client’s credit and, thus, access to housing. The report acknowledges, however, that existing laws can only go so far. There are still regulatory gaps to fill, Gilman says.
Gilman hopes the report will be a wake-up call. Many of her colleagues still don’t realize any of this is going on, and they aren’t able to ask the right questions to uncover the algorithms. Those who are aware of the problem are scattered around the US, learning about, navigating, and fighting these systems in isolation. She sees an opportunity to connect them and create a broader community of people who can help one another. “We all need more training, more knowledge—not just in the law, but in these systems,” she says. “Ultimately it’s like every case is going to turn into an algorithm case.”

In the long run, she looks to the criminal legal world for inspiration. Criminal lawyers have been “ahead of the curve,” she says, in organizing as a community and pushing back against risk-assessment algorithms that determine sentencing. She wants to see civil lawyers do the same thing: create a movement to bring more public scrutiny and regulation to the hidden web of algorithms their clients face. “In some cases, it probably should just be shut down because there’s no way to make it equitable,” she says.
As for Miriam, after Nick’s conviction, she walked away for good. She moved with her three kids to a new state and connected with a nonprofit that supports survivors of coerced debt and domestic violence. Through them, she took a series of classes that taught her how to manage her finances. The organization helped her dismiss many of her coerced debts and learn more about credit algorithms. When she went to buy a car, her credit score just barely cleared the minimum with her dad as co-signer. Since then, her consistent payments on her car and her student debt have slowly replenished her credit score.
Miriam still has to stay vigilant. Nick has her Social Security number, and they’re not yet divorced. She worries constantly that he could open more accounts, take out more loans in her name. For a while, she checked her credit report daily for fraudulent activity. But these days, she also has something to look forward to. Her dad, in his mid-60s, wants to retire and move in. The two of them are now laser-focused on preparing to buy a home. “I’m pretty psyched about it. My goal is by the end of the year to get it to a 700,” she says of her score, “and then I am definitely home-buyer ready.” “I’ve never lived in a house that I’ve owned, ever,” she adds. “He and I are working together to save for a forever home.”
" |
151 | 2,020 | "Why it’s too early to start giving out “immunity passports” | MIT Technology Review" | "https://www.technologyreview.com/2020/04/09/998974/immunity-passports-cornavirus-antibody-test-outside" | "Why it’s too early to start giving out “immunity passports” By Neel V. Patel

Imagine, a few weeks or months from now, having a covid-19 test kit sent to your home. It’s small and portable, but pretty easy to figure out. You prick your finger as in a blood sugar test for diabetics, wait maybe 15 minutes, and bam—you now know whether or not you’re immune to coronavirus.
If you are, you can request government-issued documentation that says so. This is your “immunity passport.” You are now free to leave your home, go back to work, and take part in all facets of normal life—many of which are in the process of being booted back up by “immunes” like yourself.
Pretty enticing, right? Some countries are taking the idea seriously. German researchers want to send out hundreds of thousands of tests to citizens over the next few weeks to see who is immune to covid-19 and who is not, and certify people as being healthy enough to return to society. The UK, which has stockpiled over 17.5 million home antibody testing kits, has raised the prospect of doing something similar, although this has come under major scrutiny from scientists who have raised concerns that the test may not be accurate enough to be useful. As the pressure builds from a public that has been cooped up for weeks, more countries are looking for a way out of strict social distancing measures that doesn’t require waiting 12 to 18 months for a vaccine (if one even comes).
So how does immunity testing work? Very soon after infection by SARS-CoV-2, polymerase chain reaction (PCR) tests can be used to look for evidence of the virus in the respiratory tract. These tests work by greatly amplifying viral genetic material so we can verify what virus it comes from. But weeks or months after the immune system has fought the virus off, it’s better to test for antibodies.
About six to 10 days after viral exposure, the body begins to develop antibodies that bind and react specifically to the proteins found on SARS-CoV-2. The first antibody produced is called immunoglobulin M (IgM), which is short-lived and only stays in the bloodstream for a few weeks. The immune system refines the antibodies, and just a few days later it will start producing immunoglobulins G (IgG) and A (IgA), which are much more specific. IgG stays in the blood and can confer immunity for months, years, or a lifetime, depending on the disease it’s protecting against.
In someone who has survived infection with covid-19, the blood should, presumably, possess these antibodies, which will then protect against subsequent infection by the SARS-CoV-2 virus. Knowing whether someone is immune (and eligible for potential future certification) hinges on serological testing, drawing blood to look for signs of these antibodies. Get a positive test and, in theory, that person is now safe to walk the street again and get the economy moving. Simple.
Except it’s not. There are some serious problems with trying to use the tests to determine immunity status. For example, we still know very little about what human immunity to the disease looks like, how long it lasts, whether an immune response prevents reinfection, and whether you might still be contagious even after symptoms have dissipated and you’ve developed IgG antibodies. Immune responses vary greatly between patients, and we still don’t know why. Genetics could play a role.
“We’ve only known about this virus for four months,” says Donald Thea, a professor of global health at Boston University. “There’s a real paucity of data out there.” SARS-CoV-1, the virus that causes SARS and whose genome is about 76% similar to that of SARS-CoV-2, seems to elicit an immunity that lasts up to three years.
Other coronaviruses that cause the common cold seem to elicit a far shorter immunity, although the data on that is limited—perhaps, says Thea, because there has been far less urgency to study them in such detail. It’s too early to tell right now where SARS-CoV-2 will fall in that time range.
Even without that data, dozens of groups in the US and around the world are developing covid-19 tests for antibodies. Many of these are rapid tests that can be taken at the point of care or even at home, and deliver results in just a matter of minutes. One US company, Scanwell Health, has licensed a covid-19 antibody test from the Chinese company Innovita that can look for SARS-CoV-2 IgM and IgG antibodies through just a finger-prick blood sample and give results in 13 minutes.
There are two key criteria we look for when we’re evaluating the accuracy of an antibody test. One is sensitivity, the ability to detect what it’s supposed to detect (in this case antibodies). The other is specificity, the ability to detect the particular antibodies it is looking for. Scanwell’s chief medical officer, Jack Jeng, says clinical trials in China showed that the Innovita test achieved 87.3% sensitivity and 100% specificity (these results are unpublished). That means it will not target the wrong kind of antibodies and won’t deliver any false positives (people incorrectly deemed immune), but it will not be able to tag any antibodies in 12.7% of all the samples it analyzes—those samples would come up as false negatives (people incorrectly deemed not immune).
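Since these two numbers drive everything that follows, here is a minimal sketch in Python of how they are computed from a trial's confusion matrix. The sample counts are hypothetical, chosen only so that the output matches the Innovita figures quoted above.

```python
# Sensitivity and specificity, computed from the four cells of a confusion matrix.
def sensitivity(true_pos, false_neg):
    """Fraction of truly antibody-positive samples the test flags as positive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of truly antibody-negative samples the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Hypothetical trial: 1,000 known-positive and 1,000 known-negative samples.
print(f"sensitivity: {sensitivity(true_pos=873, false_neg=127):.1%}")  # 87.3%
print(f"specificity: {specificity(true_neg=1000, false_pos=0):.1%}")   # 100.0%
```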
By comparison, Cellex, which is the first company to get a rapid covid-19 antibody test approved by the FDA, has a sensitivity of 93.8% and a specificity of 95.6%. Others are also trumpeting their own tests’ vital stats. Jacky Zhang, chairman and CEO of Beroni Group, says his company’s antibody test has a sensitivity of 88.57% and a specificity of 100%, for example. Allan Barbieri of Biomerica says his company’s test is over 90% sensitive. The Mayo Clinic is making available its own covid-19 serological test to look for IgG antibodies, which Elitza Theel, the clinic’s director of clinical microbiology, says has 95% specificity.
The specificity and sensitivity rates work a bit like opposing dials. Increased sensitivity can reduce specificity by a bit, because the test is better able to react with any antibodies in the sample, even ones you aren’t trying to look for. Increasing specificity can lower sensitivity, because the slightest differences in the molecular structure of the antibodies (which is normal) could prevent the test from finding those targets.
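In confusion-matrix terms, the two measures are just ratios over true and false test results. A minimal sketch in Python (the counts are made up for illustration):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of samples with the antibodies that get flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of samples without the antibodies that get cleared."""
    return tn / (tn + fp)

# A test that finds 95 of 100 antibody-positive samples (95% sensitive)
# and clears 95 of 100 antibody-free ones (95% specific):
print(sensitivity(tp=95, fn=5), specificity(tn=95, fp=5))  # 0.95 0.95
```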
“It really depends on what your purpose is,” says Robert Garry, a virologist at Tulane University. Sensitivity and specificity rates of 95% or higher, he says, are considered a high benchmark, but those numbers are difficult to hit; 90% is considered clinically useful, and 80 to 85% is epidemiologically useful. Higher rates are difficult to achieve for home testing kits.
But the truth is, a test that is 95% accurate isn’t nearly as useful as it sounds. Even small errors blow up over a large population. Let’s say the coronavirus has infected 5% of the population. If you test a million people at random, you ought to find 50,000 positive results and 950,000 negative results. But if the test is 95% sensitive and 95% specific, it will correctly identify only 47,500 of those positives and 902,500 of those negatives. That leaves 50,000 people with a false result. Of them, 2,500 are actually positive—immune—but won’t get an immunity passport and must stay home. That’s bad enough. Even worse, a whopping 47,500 people who are actually negative—not immune—would incorrectly test positive. Half of the 95,000 people told they are immune and free to go about their business may never have been infected at all.
Because we don’t know what the real infection rate is—1%, 3%, 5%, etc.—we don’t know how to truly predict what proportion of the immunity passports would be issued incorrectly. The lower the infection rate, the more devastating the effects of the antibody tests’ inaccuracies. The higher the infection rate, the more confident we can be that a positive result is real.
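The interplay between accuracy and prevalence described above is easy to reproduce. The sketch below runs the article’s own figures and then varies the infection rate; the positive predictive value (the chance that a positive result is genuine) is what collapses at low prevalence:

```python
def screen(population: int, prevalence: float, sens: float, spec: float):
    infected = population * prevalence
    healthy = population - infected
    tp = infected * sens        # correctly flagged as immune
    fn = infected - tp          # immune, but told to stay home
    tn = healthy * spec         # correctly cleared
    fp = healthy - tn           # not immune, but handed a "passport"
    ppv = tp / (tp + fp)        # chance a positive result is real
    return tp, fn, tn, fp, ppv

# The article's scenario: 1M people, 5% infected, 95% sensitive and specific.
print(screen(1_000_000, 0.05, 0.95, 0.95))
# -> (47500.0, 2500.0, 902500.0, 47500.0, 0.5): half of all positives are false.

# Lower prevalence makes the same test far less trustworthy:
for p in (0.01, 0.03, 0.05, 0.10):
    print(p, round(screen(1_000_000, p, 0.95, 0.95)[-1], 3))
# 0.01 -> 0.161, 0.03 -> 0.37, 0.05 -> 0.5, 0.1 -> 0.679
```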
And people with false positive results would unwittingly be walking hazards who could become infected and spread the virus, whether they developed symptoms or not. A certification system would have to test people repeatedly for several weeks before they could be issued a passport to return to work—and even then, this would only reduce the risk, not eliminate it outright.
As mentioned, cross-reactivity with other antibodies, especially ones that target other coronaviruses, is another concern. “There are six different coronaviruses known to infect humans,” says Thea. “And it’s entirely possible if you got a garden-variety coronavirus infection in November, and you did not get covid-19, you could still test positive for the SARS-CoV-2 antibodies.” Lee Gehrke, a virologist and biotechnology researcher at Harvard and MIT, whose company E25Bio is also developing serological tests for covid-19, raises another issue. “It's not yet immediately clear,” he says, “that the antibodies these tests pick up are neutralizing.” In other words, the antibodies detected in the test may not necessarily act against the virus to stop it and protect the body—they simply react to it, probably to tag the pathogen for destruction by other parts of the immune system.
Gehrke says he favors starting with a smaller-scale, in-depth study of serum samples from confirmed patients that defines more closely what the neutralizing antibodies are. This would be an arduous trial, “but I think it would be much more reassuring to have this done in the US before we take serological testing to massive scale,” he says.
Alan Wells, the medical director of clinical laboratories at the University of Pittsburgh Medical Center, raises a similar point. He says that some patients who survive infection and are immune may simply not generate the antibodies you’re looking for. Or they may generate them at low levels that do not actually confer immunity, as some Chinese researchers claim to have found.
“I would shudder to use IgM and IgG testing to figure out who’s immune and who’s not,” says Wells. “These tests are not ready for that.” Even if the technology is more accurate, it might still simply be too early to start certifying immunity just to open up the economy. Chris Murray from the University of Washington’s Institute for Health Metrics and Evaluation told NPR his group’s models predict that come June, “at least 95% of the US will still be susceptible to the virus,” leaving them vulnerable to infection by the time a possible second wave comes around in the winter. Granting immunity passports to less than 5% of the workforce may not be all that worthwhile.
Theel says that instead of being used to issue individual immunity passports, serology tests could be deployed en masse, over a long period of time, to see if herd immunity has set in—lifting or easing restrictions wholesale after 60 to 70% of a community’s population tests positive for immunity. There are a few case studies that hold promise. San Miguel County in Colorado has partnered with biotech company United Biomedical in an attempt to serologically test everyone in the county. The community is small and isolated, and therefore easier to test comprehensively. Iceland has been doing the same thing across the country.
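That 60 to 70% range matches the textbook herd-immunity threshold, 1 - 1/R0, evaluated at early estimates of the coronavirus’s basic reproduction number. The R0 values below are illustrative; the true value was itself still uncertain at the time:

```python
# Herd-immunity threshold as a function of the basic reproduction number R0:
# once a fraction 1 - 1/R0 of the population is immune, each case infects
# fewer than one other person on average and the outbreak recedes.
for r0 in (2.5, 3.0, 3.3):
    print(f"R0 = {r0}: threshold = {1 - 1/r0:.0%}")
# R0 = 2.5: threshold = 60%
# R0 = 3.0: threshold = 67%
# R0 = 3.3: threshold = 70%
```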
This would require a massively organized effort to pull off well in highly populated areas, and it’s not clear whether the decentralized American health-care system could do it. But it’s probably worth thinking about if we hope to reopen whole economies, and not just give a few individuals a get-out-of-jail-free card.
Not everyone is so skeptical about using serological testing on a case-by-case basis. Thea thinks the data right now suggests SARS-CoV-2 should behave like its close cousin SARS-CoV-1, resulting in an immunity that lasts for maybe a couple of years. “With that in mind, it’s not unreasonable to identify individuals who are immune from reinfection,” he says. “We can have our cake and eat it too. We can begin to repopulate the workforce—most importantly the health-care workers.” For instance, in hard-hit cities like New York that are suffering from a shortage of health-care workers, a serological test could help nurses and doctors figure out who might be immune, and therefore better equipped to work in the ICU or conduct procedures that put them at a high risk of exposure to the virus, until a vaccine comes along.
And at the very least, serological testing is potentially useful because many covid-19 cases present, at most, only mild symptoms that don’t require any kind of medical intervention. About 18% of infected passengers on the Diamond Princess cruise ship showed no symptoms whatsoever, suggesting there may be a huge number of asymptomatic cases. These people almost certainly aren’t being tested (CDC guidelines for covid-19 testing specifically exclude those without symptoms). But their bodies are still producing antibodies that should be detectable long after the infection is cleared. If they develop immunity to covid-19 that’s provable, then in theory, they could freely leave the house once again.
For now, however, there are too many problems and unknowns to use antibody testing to decide who gets an immunity passport and who doesn’t. Countries now considering it might find out they will either have to accept enormous risks or simply sit tight for longer than initially hoped.
Correction: The initial version of this story incorrectly stated: “The higher the infection rate, the more devastating the effects of the antibody tests’ inaccuracies.” A higher infection rate would actually produce more confident antibody test results. We regret the error.
by Neel V. Patel
" |
152 | 2,020 | "Singapore is the model for how to handle the coronavirus | MIT Technology Review" | "https://www.technologyreview.com/s/615353/singapore-is-the-model-for-how-to-handle-the-coronavirus" | "Singapore is the model for how to handle the coronavirus By Spencer Wells I began writing this at Raffles Hotel, a gleaming white pinnacle of Singapore’s British colonial past. Immaculately renovated over the past two and a half years, it is truly one of the world’s most luxurious hotels. In many ways, it epitomizes what Singapore has become since it asserted itself as an independent city-state in 1965.
Lee Kuan Yew, the founding prime minister of Singapore, was a visionary statesman, both strongman and technocrat. Revered here as founder, leader, truth-teller, and symbol of the young nation, he created the playbook for modern Singapore, including among other things a commitment to transparency, a belief in the power of reason over superstition, and a love of cleanliness. All these have combined to create Singapore’s world-leading response to the coronavirus that emerged in China at the end of last year, spreading rapidly around the globe over the past two months.
Singapore was hit early, as one of China’s key trading partners. Within a few weeks of the first official notice of “Wuhan flu,” it had a dozen cases. But it very quickly realized that this was more than the seasonal flu, and took rapid action. Primed by experience with the SARS virus of 2002-3, Singapore began carefully tracking cases to find the commonalities that linked them. Within a day, sometimes two, of a new case being detected, the authorities were able to piece together the complex chain of transmission from one person to another, like Sherlock Holmes with a database. As of February, everyone entering a government or corporate building in Singapore had to provide contact details to expedite the process.
It’s not simply the ability to detect cases and explain why they happened that makes Singapore such a role model in this epidemic. Nucleic acid testing kits were also rapidly developed and deployed to ports of entry. Within three hours, while individuals are quarantined on-site, officials can confirm whether or not they are infected with the virus before allowing them to enter.
The response in the US has essentially been the opposite. Early on, most people seemed to assume it was a “Chinese,” or perhaps an “Asian,” issue—pandemics don’t happen in the US! That arrogant complacency led the public health authorities to let down their guard. Dozens of infected people, perhaps more, were let into the US and then allowed—even encouraged—to go to work sick, hastening the spread of the virus.
When some of these people became ill with symptoms of Covid-19 and asked to be tested, they were refused because they didn’t have a direct connection to China, or they weren’t sick enough. It was a bit of a moot point, though, since the testing kit developed and distributed by the CDC was faulty and couldn’t be used. This unconscionable delay in testing, coupled with the fact that 25% of American workers lack sick leave, effectively forced people to return to work, spreading the infection further.
As I finish writing this, I am far up river in the remote southern part of Borneo, near Camp Leakey, Biruté Galdikas’s 50-year-old orangutan research center. I’ve been “off the grid” for the past two days, and am posting this via satellite. When I left, things were not looking good for the US, and (predictably) the virus had further polarized our already deeply divided country. The funny thing about viruses, though, is that they don’t care about political parties, or national boundaries, or net worth. All they care about is reproducing. And this one seems to be particularly good at it.
When I decided to go through with my travel plans to Southeast Asia, many people told me I was crazy. “You’re flying into the eye of the storm!” some said, looking at the infection numbers as of a few weeks ago. Now I can’t help but feel the same sense of surprise and horror at what’s unfolding back at home. The US and Europe are now the centers of the storm. Good luck to us all.
Spencer Wells is a geneticist, anthropologist, and a former Explorer-in-Residence at the National Geographic Society.
" |
153 | 2,020 | "A coronavirus vaccine will take at least 18 months—if it works at all | MIT Technology Review" | "https://www.technologyreview.com/s/615331/a-coronavirus-vaccine-will-take-at-least-18-monthsif-it-works-at-all" | "A coronavirus vaccine will take at least 18 months—if it works at all By Antonio Regalado
During a press opportunity on March 2, a dozen biotech company executives joined President Donald Trump around the same wooden table where his cabinet meets.
As each took a turn saying what they could add to the fight against the spreading coronavirus, Trump was interested in knowing exactly how soon a countermeasure might be ready.
But only one presenter—Stéphane Bancel, the CEO of Moderna Pharmaceuticals in Cambridge, Massachusetts—could say that just weeks into the outbreak his company had already delivered a candidate vaccine into the hands of the government for testing.
“So you are talking over the next few months you think you could have a vaccine?” Trump said, looking impressed.
“Correct,” said Bancel, whose company is pioneering a new type of gene-based vaccine. It had been, he said, just a matter of “a few phone calls” with the right people.
Drugs advance through stages: first safety testing, then wider tests of efficacy. Bancel said he meant that a Phase 2 test, an early round of efficacy testing, might begin by summer. But it was not clear if Trump heard it the same way.
“You wouldn’t have a vaccine. You would have a vaccine to go into testing,” interjected Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases, who has advised six presidents, starting with Ronald Reagan during the HIV epidemic.
“How long would that take?” Trump wanted to know.
“Like I have been telling you, a year to a year-and-a-half,” Fauci said. Trump said he liked the sound of two months a lot better.
The White House coronavirus event showed how biotech and drug companies have jumped in to meet the contagion threat using speedy new technology. Also present were representatives of Regeneron Pharmaceuticals, CureVac, and Inovio Pharmaceuticals, which tested a gene vaccine against Zika and says a safety study of its own candidate coronavirus vaccine could begin in April.
But lost in the hype over the fast new vaccines is the reality that technologies such as the one being developed by Moderna are still unproven. No one, in fact, knows whether they will work.
Moderna makes “mRNA vaccines”—basically, it embeds the genetic instructions for a component of a virus into a nanoparticle, which can then be injected into a person. Although new methods like Moderna’s are lightning fast to prepare, they have never led to a licensed vaccine for sale.
What’s more, despite the fast start, any vaccine needs to prove that it’s safe and that it protects people from infection. Those steps are what lock in the inconvenient 18-month time line Fauci cited. While a safety test might take only three months, the vaccine would then need to be given to hundreds or thousands of people at the core of an outbreak to see if recipients are protected. That could take a year no matter what technology is employed.
Vaccine hope and hype

In late February, share prices for Moderna Pharmaceuticals soared 30% when the company announced it had delivered doses of the first coronavirus vaccine candidate to the National Institutes of Health, pushing its stock market valuation to around $11 billion, even as the wider market cratered. The vaccine could be given to volunteers by the middle of this month.
The turnaround speed was, in fact, awesome. As Bancel put it, it took only 42 days “from the sequence of a virus” for his company to ship vaccine vials to Fauci’s group at the NIH.
Moderna did it by using technology in which genetic information is added to nanoparticles. In this case, the company added the genetic instructions for the “spike” protein the virus uses to fuse with and invade human cells. If injected into a person, nanoparticles like this could cause the body to immunize itself against the real contagion.
At Moderna’s offices in Cambridge, Bancel and others had been tracking the fast-moving outbreak since January. To begin their work, all they’d needed was the sequence of the virus then spreading in Wuhan, China. When Chinese scientists started putting versions online, its scientists grabbed the sequence of the spike protein. Then, at its manufacturing center in Norwood, Massachusetts, it could start making the spike mRNA, adding it to lipid nanoparticles, and putting the result in sterile vials.
During the entire process, Moderna didn’t need—or even want—actual samples of the infectious coronavirus. “What we are doing we can accomplish with the genetic sequence of the virus. So as soon as it was posted, we and everyone else downloaded it,” Moderna president Stephen Hoge said in an interview in January.
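The “just download the sequence” step really is that direct in software terms: given the spike protein’s coding sequence, the mRNA to manufacture follows by straight transcription. A toy illustration with Biopython (the 15-base fragment is shown purely for illustration; the full spike gene runs to roughly 3,800 bases):

```python
from Bio.Seq import Seq

# Opening bases of a spike coding sequence (a short fragment, for illustration).
spike_cds = Seq("ATGTTTGTTTTTCTT")

mrna = spike_cds.transcribe()    # coding-strand DNA -> mRNA: T becomes U
protein = spike_cds.translate()  # the peptide a ribosome would build from it

print(mrna)     # AUGUUUGUUUUUCUU
print(protein)  # MFVFL
```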
Moderna has already made a few experimental vaccines this way, against diseases including the flu, so it could adapt the same manufacturing process to a new threat. It only needed to swap out what RNA it added. “It’s like replacing software rather than building a new computer,” says Jacob Becraft, CEO of Strand Therapeutics, which is designing vaccines and cancer treatments with RNA. “That is why Moderna was able to turn that around so quickly.” The company says its approach is safe: it has dosed about 1,000 people in six earlier safety trials for a range of infections. What it hasn’t ever shown, however, is whether its technology actually protects human beings against disease.
“You don’t have a single licensed vaccine with that technology,” a vaccine specialist named Peter Hotez, chief of Baylor University’s National School of Tropical Medicine, said in a congressional hearing on March 5, three days after the White House event.
During his testimony, Hotez, who himself developed a SARS vaccine that never reached human testing, went out of his way to ding companies for raising expectations. “Unfortunately, some of my colleagues in the biotech industry are making inflated claims,” he told the legislators. “There are a lot of press releases from the biotechs, and some of them I am not very happy about.” Moderna did not respond to Hotez’s criticisms or to a question about whether Trump had misunderstood Bancel. “We have no comment at this time,” said Colleen Hussey, a spokesperson for the company.
Types of vaccines

There are about a half-dozen basic types of vaccines, including killed viruses, weakened viruses, and vaccines that involve injections of viral proteins. All aim to expose the body to components of the virus so specialized blood cells can make antibodies. Then, when the real infection happens, a person’s immune system will be primed to halt it.
“And all those strategies are being tried against coronavirus,” says Drew Weissman, an expert on RNA vaccines at the University of Pennsylvania. Weissman says a coronavirus “is not a difficult virus to make a vaccine against.” Each technology has pros and cons, and some move more slowly. For instance, the French pharmaceutical giant Sanofi has lined up funding to make a more conventional vaccine, which it says will take six months to create. Tests on people couldn’t happen until 2021.
What makes mRNA vaccines different—and potentially promising—is that once a company has a way to make them, it’s fast to respond to new threats as they arise, just by altering the gene content. “That is tremendous speed, and that is something RNA vaccines enable, but no one can guarantee that those vaccines will absolutely work,” says Ron Weiss, a synthetic biologist at MIT and a cofounder of Strand. “It’s not going to happen in a couple of months. It’s not going to happen by the summer. It’s a promising but unproven modality. I am excited about it as a modality, but just as with any new modality, you have to be very careful. Do you get enough expression? Does it persist? Does it elicit any adverse responses?” Weissman says the idea of genetic vaccines—using DNA or RNA—is 30 years old, but tests have revealed unwanted immune reactions and, in some cases, lack of potent enough effects. Those problems have not been entirely overcome, says Weissman, who invented a chemical improvement that his university licensed to Moderna and BioNTech, a German biotech he currently works with.
Moderna has published only two results so far, he says, both from safety trials of influenza vaccines, which he considers a mixed success because the vaccines didn’t generate as much immunity as hoped. Weissman believes contaminants of impure RNA in the preparation may be to blame.
“There are two stories: what we see in animals and what Moderna has put into people. What we see in animals is a really potent response, in every animal through mice and monkeys,” he says. “While the Moderna trials weren’t terrible—the responses were better than a standard vaccine—they were much lower than expected.” Moderna’s new coronavirus vaccine candidate could run into similar problems, and even though it’s first out of the gates, it could be overtaken by more conventional vaccines if those prove more effective. “Usually when you invest in something new, you want it to be better,” he says. “Otherwise how would you replace what is old?”

Safety test

Moderna’s candidate, however, is almost certain to be the first coronavirus vaccine tried in humans. The Boston Globe reported that the NIH is already recruiting volunteers for the Phase I safety trial, and the first volunteer could get a shot by mid-month at the Kaiser Permanente Washington Health Research Institute in Seattle, a city rocked by a coronavirus outbreak.
Doctors will monitor the healthy volunteers for reactions and check to see if their bodies start producing antibodies against the virus. Researchers can take their blood and see if it “neutralizes” the virus in laboratory tests. If antibody levels in the blood serum are high enough, those antibodies should attach to the spike protein and block the virus from entering cells.
If that safety test goes smoothly, it may be possible to begin Phase 2 trials by summer to determine whether vaccinated people are protected from the contagion. However, that will involve dosing hundreds or thousands of people near an outbreak and at risk of infection, says Fauci.
“You do that in areas where there is an active infection, so you are really talking a year, a year and half, before you know something works,” Fauci said to Howard Bauchner, the editor of the Journal of the American Medical Association, in a podcast aired last week.
A vaccine won’t save us

As of last week, the number of coronavirus cases worldwide had surpassed 113,000, with cases in 34 US states. Over the weekend the World Health Organization again urged countries to slow the spread with “robust containment and control activities,” pointedly adding that “allowing uncontrolled spread should not be a choice of any government.” One downside of faith in an experimental vaccine is the risk that it could lead officials to slow-walk containment steps like restricting travel or closing schools, measures that are already causing economic losses.
Another thing to look for next is whether, and how, the administration tries to fast-track the vaccine effort. Some of the executives at the White House meeting took the chance to say more government money would help pay for manufacturing plants, among other needs, while others suggested to Trump that the US Food and Drug Administration could expedite testing in some fashion.
Although no one said they wished to distribute a vaccine that has not been fully proven, by telling Trump it’s time to build factories and cut red tape, the executives may have put that idea on the table.
Fauci has since taken opportunities to warn against such a step. While the FDA has ways to speed projects, any move to skip the collection of scientific evidence and give an unproven vaccine to healthy people could easily backfire.
That’s in part because vaccines can sometimes make diseases worse, not better. Hotez says the effect is called “immune enhancement,” and that he saw it with one version of his SARS vaccine, which sickened mice.
In his podcast with JAMA, Fauci cautioned about what could occur if you “get what you think is a vaccine, and just give it to people.” Because vaccine recipients are healthy, there’s not much margin for error: “So we are not going to have a vaccine in the immediate future, which tells us we have to rely on the public measures.”

by Antonio Regalado
" |
154 | 2,018 | "China’s giant transmission grid could be the key to cutting climate emissions | MIT Technology Review" | "https://www.technologyreview.com/s/612390/chinas-giant-transmission-grid-could-be-the-key-to-cutting-climate-emissions" | "China’s giant transmission grid could be the key to cutting climate emissions By James Temple In early February, Chinese workers began assembling a soaring red-and-white transmission tower on the eastern edge of the nation's Anhui province. The men straddled metal tubes as they tightened together latticed sections suspended high above the south bank of the Yangtze River.
The workers were erecting a critical component of the world’s first 1.1-million volt transmission line, at a time when US companies are struggling to build anything above 500,000 volts. Once the government-owned utility, State Grid of China, completes the project next year, the line will stretch from the Xinjiang region in the northwest to Anhui in the east, connecting power plants deep in the interior of the country to cities near the coast.
The transmission line will be capable of delivering the output of 12 large power plants over nearly 2,000 miles (3,200 kilometers), sending 50% more electricity 600 miles further than anything that’s ever been built. (Higher-voltage lines can carry electricity over longer distances with lower transmission losses.) As one foreign equipment provider for the project boasts, the line could ship electricity from Beijing to Bangkok—which, as it happens, only hints at State Grid’s rising global ambitions.
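The parenthetical about losses is basic circuit physics: for a fixed power delivery P over a line with total resistance R, the current is I = P/V, so resistive loss I²R falls with the square of the voltage. A back-of-the-envelope sketch in Python (the 1.5-ohm line resistance is an assumed round number, not State Grid’s spec):

```python
def loss_fraction(power_w: float, volts: float, resistance_ohms: float) -> float:
    """Resistive loss as a share of power sent, for an idealized DC line."""
    current = power_w / volts                      # I = P / V
    return current**2 * resistance_ohms / power_w  # I^2 * R / P

P, R = 12e9, 1.5  # ~12 GW (the output of 12 large plants); assumed resistance
for kv in (500, 800, 1100):
    print(f"{kv} kV: {loss_fraction(P, kv * 1e3, R):.1%} lost")
# 500 kV: 7.2% lost; 800 kV: 2.8% lost; 1100 kV: 1.5% lost.
# Doubling the voltage cuts resistive loss fourfold, which is the whole case
# for ultra-high voltage on very long routes.
```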
The company initially developed and built ultra-high-voltage lines to meet the swelling energy appetites across the sprawling nation, where high mountains and vast distances separate population centers from coal, hydroelectric, wind, and solar resources. But now State Grid is pursuing a far more ambitious goal: to stitch together the electricity systems of neighboring nations into transcontinental “supergrids” capable of swapping energy across borders and oceans.
These massive networks could help slash climate emissions by enabling fluctuating renewable sources like wind and solar to generate a far larger share of the electricity used by these countries. The longer, higher-capacity lines make it possible to balance out the dimming sun in one time zone with, say, wind, hydroelectric, or geothermal energy several zones away.
Politics and bureaucracy have stymied the deployment of such immense, modern power grids in much of the world. In the United States, it can take more than a decade to secure the necessary approvals for the towers, wires, and underground tubes that cut across swaths of federal, national, state, county, and private lands—on the rare occasion when they get approved at all.
“A long-distance interconnected transmission grid is a big piece of the climate puzzle,” says Steven Chu, the former US energy secretary, who serves as vice chairman of the nonprofit that State Grid launched in 2016 to promote international grid connections. “China is saying ‘We want to be leaders in all these future technologies’ instead of looking in the rear-view mirror like the United States seems to be doing at the moment.” But facilitating the greater use of renewables clearly isn’t China’s only, or even primary, motivation. Transmission infrastructure is a strategic piece of the Belt and Road Initiative, China’s multitrillion-dollar effort to build development projects and trade relationships across dozens of nations. Stretching its ultra-high-voltage wires around the world promises to extend the nation’s swelling economic, technological, and political power.
23,000 miles of wires State Grid is probably the biggest company you’ve never heard of, with nearly 1 million employees and 1.1 billion customers. Last year, it reported $9.5 billion in profits on $350 billion in revenue, making it the second-largest company on Fortune’s Global 500 list.
State Grid is already the biggest power distributor in Brazil, where it built its first (and still only) overseas ultra-high-voltage line. The company has also snapped up stakes in national transmission companies in Australia, Greece, Italy, the Philippines, and Portugal. Meanwhile, it’s pushing ahead on major projects in Egypt, Ethiopia, Mozambique, and Pakistan and continues to bid for shares in other European utilities.
“A lot of Chinese companies are very ambitious in spreading overseas,” says Simon Nicholas, a co-author of a report tracking these investments by the Institute for Energy Economics and Financial Analysis, a US think tank. “But State Grid is on another level.” State Grid was created in late 2002, when the government broke up a massive monopoly, the State Power Corporation of China, into 11 smaller power generation and distribution companies. That regulatory unbundling was designed to introduce competition and accelerate development as the nation struggled to meet rising energy demands and halt recurrent blackouts. But State Grid was by far the larger of two resulting transmission companies, and it operates as an effective monopoly across nearly 90% of the nation.
In 2004, the Communist Party handpicked Liu Zhenya, the former head of Shandong province’s power bureau, to replace the retiring chief executive of State Grid. Liu, a savvy operator with a talent for navigating party politics, almost immediately began to lobby hard for ultra-high-voltage projects, according to Sinews of Power: The Politics of the State Grid Corporation of China by Xu Yi-Chong, a professor at Griffith University in Australia.
Lines capable of sending more energy over greater distances could stitch together the nation’s fragmented grids, instantly delivering excess electricity from one province to another in need, Liu argued. Later, as China came under growing pressure to clean up pollution and greenhouse-gas emissions, State Grid’s rationale evolved: the power lines became a way to accommodate the growing amount of renewable energy generation.
From the start, critics asserted that State Grid was pushing ultra-high-voltage transmission primarily as a means of consolidating its dominant position, or that the new technology was an expensive and risky way of shoring up rickety energy infrastructure.
But Liu’s arguments won out: early projects were approved and built, and party leaders soon prioritized ultra-high-voltage technology in China’s influential five-year plans.
The company at first collaborated closely with foreign firms developing transmission technology, including Sweden’s ABB and Germany’s Siemens, and it continues to buy some equipment from them. But it quickly assimilated the expertise of its partners and began developing its own technology , including high-voltage transformers as well as lines that can function at very high altitudes and very low temperatures. State Grid has also developed software that can precisely control the voltage and frequency arriving at destination points throughout the network, enabling the system to react rapidly and automatically to shifting levels of supply and demand.
The company switched on its first million-volt alternating current line in 2009 and the world’s inaugural 800,000-volt direct current line in 2010. State Grid, and by extension China, is now by far the world’s biggest builder of these lines. By the end of 2017, 21 ultra-high-voltage lines had been completed in the country, with four more under construction, Liu said during a presentation at Harvard University in April.
Collectively, they’ll stretch nearly 23,000 miles and be capable of delivering some 150 gigawatts of electricity—roughly the output of 150 nuclear reactors.
At the end of last year, China had poured at least 400 billion yuan ($57 billion) into the projects, according to Bloomberg New Energy Finance. After a slowdown in new project approvals during the last two years, China’s National Energy Administration said in September that it will sign off on 12 new ultra-high-voltage projects by the end of 2019.
“The fact of the matter is, the Chinese are the only ones seriously building it at this point,” says Christopher Clack, chief executive of Vibrant Clean Energy and a former researcher with the US National Oceanic and Atmospheric Administration. In a study published in Nature in 2016, Clack found that using high-voltage direct-current lines to integrate the US grid could cut electricity emissions to 80% below 1990 levels within 15 years (see “ How to get Wyoming wind to California, and cut 80% of US carbon emissions”).
Going global In late February of 2016, Liu walked to the lectern at an energy conference in Houston and announced an audacious plan: using ultra-high-voltage technology to build an energy network that would circle the globe.
By interconnecting transmission infrastructure across oceans and continents, in much the way we've intertwined the internet, the world could tap into vast stores of wind power at the North Pole and solar along the equator, he said. This would clean up global electricity generation, cut energy costs, and even ease international tensions.
“Eventually, our world will turn into a peaceful and harmonious global village, a community of common destiny for all mankind with sufficient energy, blue skies, and green land,” he said.
Of course, such a global grid won’t happen. It would cost more than $50 trillion and require unprecedented—and unrealistic—levels of international trust and cooperation. Moreover, few nations are clamoring for these kinds of high-voltage lines even within their boundaries.
A handful of countries already exchange electricity through standard transmission lines, but efforts to share renewable resources across wide regions have largely gone nowhere.
Among the notable failures is the Desertec Industrial Initiative, an effort backed by Siemens and Deutsche Bank a decade ago to power North African, Middle Eastern, and European electricity grids with solar power from the Sahara.
But State Grid’s global grid plan is basically a sales pitch for its long-distance transmission lines, promoting them as a fundamental enabling technology for the clean-energy transition. If all the company ever achieves are the opening moves in the vision of global interconnectivity, and it develops regional grids connecting a handful of nations, it could still make a lot of money.
Notably, at a conference in Beijing the month after Liu’s speech, the company signed a deal with Korea Electric Power, Japan’s Softbank, and Russian power company Rosseti to collaborate on the development of a Northeast Asian “supergrid” connecting those nations and Mongolia.
Softbank boss Masayoshi Son had proposed a version of the supergrid independent of State Grid back in 2011, after the Fukushima nuclear catastrophe underscored the fragility of Japan’s electricity sector.
Kenichi Yuasa, a spokesperson for the conglomerate, said feasibility studies completed in 2016 and 2017 showed that grid connections between Mongolia, China, Korea, and Japan, as well as a route between Russia and Japan, are both “technically and economically feasible.” “We, as a commercial developer, are ready to execute the projects and would like to deliver tangible progress before Tokyo Olympics in 2020,” he said in an e-mail.
In a response to inquiries from MIT Technology Review, State Grid disputed the argument that the broader global interconnection plan won't happen, or that its driving motivations are primarily financial and geopolitical.
"The great success of UHV technology application in China represents a major innovation of power transmission technology," the company said in a statement. "State Grid would like to share this kind of technological innovation with the rest of the world, addressing a possible solution to vital concerns for humankind for example, environmental pollution, climate change, and lack of access to electricity supply." Cleaning up or cleaning up ? In fact, though China has built far more ultra-high-voltage lines than any other country in the world, its own grid is still something of a mess. The country is struggling to efficiently balance its power production and demand, and to distribute electricity where and when it is needed. One result is that it isn’t making full use of its existing renewable-power plants. A recent MIT paper noted that China’s rates of renewable curtailment—the term for when plants are throttled down because of inadequate demand—are the highest in the world and getting higher.
Part of the problem is that it’s easier and more lucrative to use “predictable electrons” from sources like coal or nuclear, which provide a constant stream of electricity, than the variable generation from renewables, says Valerie Karplus, former director of the Tsinghua-MIT China Energy and Climate Project. Mandatory quotas for fossil-fuel plants and provincial politics also distort allocation decisions, she adds.
Less than half of the ultra-high-voltage lines built or planned to date in China are intended to transmit electricity from renewable sources, according to a late-2017 report by Bloomberg New Energy Finance.
“Getting the most out of wind, solar, and other intermittent sources will require rethinking how to make grid operations more flexible and responsive,” Karplus said in an e-mail.
Despite its purported green ambitions, State Grid itself has resisted the broader market reforms that would be necessary to lessen China’s dependence on fossil-fuel plants. All of which raises questions about the company’s commitment to cutting greenhouse-gas emissions, and how much the long-distance lines will really help to clean up power generation elsewhere.
Tellingly, State Grid’s main target markets are in poor countries where fossil-fuel plants dominate and Chinese companies are busy building hundreds of new coal plants. So there’s little reason to expect that any ultra-high-voltage lines built there would primarily carry energy from renewable sources anytime soon.
“I haven’t seen anything that would make me think this is part of a green-development initiative,” says Jonas Nahm, who studies China’s energy policy at the Johns Hopkins School of Advanced International Studies. “I think State Grid just wants to sell these things anywhere and dominate with its own standards over those developed by Siemens and other companies.” He believes State Grid’s broader ambitions are tied to the Belt and Road Initiative , through which China’s state banks are plowing trillions into infrastructure projects across Asia and Africa in an effort to sell Chinese goods and strengthen the country’s geopolitical influence. Building, owning, or operating another nation’s critical infrastructure—be it seaports or transmission lines—offers a particularly effective route to exercise soft and sometimes not-so-soft power. “This is really a battle over the developing world,” Nahm says.
hide by James Temple Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window This story was part of our January/February 2019 issue.
Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Climate change and energy Think that your plastic is being recycled? Think again.
Plastic is cheap to make and shockingly profitable. It’s everywhere. And we’re all paying the price.
By Douglas Main archive page 15 Climate Tech Companies to Watch By Amy Nordrum archive page 2023 Climate Tech Companies to Watch: Blue Frontier and its energy-efficient AC The startup's AC units suck moisture out of the air for more efficient cooling.
By Amy Nordrum archive page Oyster fight: The humble sea creature could hold the key to restoring coastal waters. Developers hate it.
Revitalizing oyster farms and wild oyster reefs could undo decades of environmental destruction on our coasts By Anna Kramer archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more.
Enter your email Thank you for submitting your email! It looks like something went wrong.
We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at customer-service@technologyreview.com with a list of newsletters you’d like to receive.
The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window
" |
155 | 2,017 | "Volodymyr Mnih | MIT Technology Review" | "https://www.technologyreview.com/innovator/volodymyr-mnih" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Pioneers They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Full list Categories Past Years Age: 34 Affiliation: DeepMind Volodymyr Mnih The first system to play Atari games as well as a human can.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Minh’s system was the first to combine the playful characteristics of reinforcement learning with the rigorous approach of deep learning, which mirrors the way the human brain processes information—learning by example. His software learned to play the games much as a human would, through playful trial and error, while using the game score as a measurement by which to hone and perfect its technique for each game.
—Simon Parkin by Simon Parkin Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 33 Affiliation: Independent filmmaker Jessica Brillhart A pioneer in virtual-reality filmmaking.
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.” —Caleb Garling by Caleb Garling Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 20 Affiliation: DoNotPay Joshua Browder Using chatbots to help people avoid legal fees.
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.” Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.
Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.
But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.
—Peter Burrows by Peter Burrows Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 32 Affiliation: University of Massachusetts, Amherst Phillipa Gill An empirical method for measuring Internet censorship.
Five years ago, when Phillipa Gill began a research fellowship at the University of Toronto’s Citizen Lab, she was surprised to find that there was no real accepted approach for empirically measuring censorship. So Gill, now an assistant professor of computer science at the University of Massachusetts, Amherst, built a set of new measurement tools to detect and quantify such practices. One technique automatically detects so-called block pages, which tell a user if a site has been blocked by a government or some other entity. In 2015, Gill and colleagues used her methods to confirm that a state-owned ISP in Yemen was using a traffic-filtering device to block political content during an armed conflict.
—Mike Orcutt by Mike Orcutt Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 32 Affiliation: IBM Research in Zurich Fabian Menges A method for measuring temperatures at the nanoscale.
Problem: Complex microprocessors — like those at the heart of autonomous driving and artificial intelligence — can overheat and shut down. And when it happens, it’s usually the fault of an internal component on the scale of nanometers. But for decades, nobody who designed chips could figure out a way to measure temperatures down to the scale of such minuscule parts.
Solution: Fabian Menges, a researcher at IBM Research in Zurich, Switzerland, has invented a scanning probe method that measures changes to thermal resistance and variations in the rate at which heat flows through a surface. From this he can determine the temperature of structures smaller than 10 nanometers. This will let chipmakers come up with designs that are better at dissipating heat.
—Russ Juskalian by Russ Juskalian Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 22 Affiliation: Luminar Austin Russell Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
—Jamie Condliffe

Age: 34 Affiliation: University of Toronto
Angela Schoellig
Her algorithms are helping self-driving and self-flying vehicles get around more safely.
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-by-10-by-10-meter enclosed space for training drones to fly together. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collisions, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
—Simon Parkin

Age: 31 Affiliation: University of Michigan
Jenna Wiens
Her computational models identify patients who are most at risk of a deadly infection.
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived.
Among the most lethal of these is Clostridium difficile.
The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
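As a minimal sketch of that “throw everything in” strategy, the toy below trains an L2-regularized logistic regression on synthetic stand-in data with scikit-learn. The feature counts, penalty strength, and data are all invented for illustration; this is not Wiens's actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for an EHR extract: thousands of binary indicators
# (medications, labs, procedures), most of them irrelevant to the outcome.
n_patients, n_features = 5000, 2000
X = rng.binomial(1, 0.05, size=(n_patients, n_features))
true_risk = X[:, :15] @ rng.normal(1.0, 0.3, size=15)  # a few real risk factors
y = rng.binomial(1, 1 / (1 + np.exp(-(true_risk - 3.0))))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Regularization lets the model ingest every available variable and shrink
# the useless ones toward zero, instead of hand-picking risk factors.
model = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)
model.fit(X_tr, y_tr)
print("AUROC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))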
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile—the infections come on fast, so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs are climbing steeply, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”
—Emily Mullin

Age: 32 Affiliation: Alibaba Cloud
Hanqing Wu
A cheaper solution for devastating hacking attacks.
During a distributed denial-of-service (DDoS) attack, an attacker overwhelms a target server with traffic until it collapses. The traditional way of fending off such an attack is to stockpile bandwidth so the server under attack always has more than enough capacity to absorb whatever the attacker sends. But as hackers become capable of attacks with bigger and bigger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
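A toy model of the mechanism: one hostname backed by a large address pool, with benign traffic re-pointed at a fresh address whenever the active one is flooded. The class name, addresses, and rotation logic below are hypothetical, chosen only to make the idea concrete.

import itertools

class ElasticFront:
    # Toy model of an "elastic" DDoS defense: one hostname, many IPs.
    # When the active address is flooded, verified clients are re-pointed
    # at a fresh address; the attacked address is retired to the flood.

    def __init__(self, address_pool):
        self._pool = itertools.cycle(address_pool)
        self.active = next(self._pool)
        self.retired = []

    def resolve(self) -> str:
        # What a verified client's lookup would return right now.
        return self.active

    def report_attack(self):
        # Rotate to the next address instead of buying more bandwidth.
        self.retired.append(self.active)
        self.active = next(self._pool)

front = ElasticFront([f"203.0.113.{i}" for i in range(1, 255)])
print("serving from", front.resolve())
front.report_attack()
print("after attack, serving from", front.resolve())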
—Yiting Sun
" |
156 | 2,015 | "Tallis Gomes | MIT Technology Review" | "https://www.technologyreview.com/innovator/tallis-gomes" | "Entrepreneurs: Meet the people who are taking innovations like CRISPR and flexible electronics and turning them into businesses.
Age: 30 Affiliation: Singu
Tallis Gomes
An “Uber for beauty.”
Tallis Gomes had spent four years as the CEO of EasyTaxi, the “Uber of Brazil,” when he decided in 2015 to aim the same concept in a new direction—the beauty industry.
His on-demand services platform, called Singu, allows customers to summon a masseuse, manicurist, or other beauty professional to their home or office. Scheduling is done by an algorithm factoring in data from Singu and third parties, including location and weather. The professionals see fewer customers than they would in a shop, but they make more money because they don’t have to cover the overhead. Gomes says the algorithm can get a manicurist as many as 110 customers in a month, and earnings of $2,000—comparable to what a lawyer or junior engineer might make.
—Nanette Byrnes

Age: 33 Affiliation: Innovate Ventures, IBM Research Africa
Abdigani Diriye
A computer scientist who founded Somalia’s first incubator and startup accelerator.
“Like many Somalis, I ended up fleeing my homeland because of the civil war, back in the late 1980s. At age five I moved to the U.K. because I had family there and was able to get asylum. I grew up in a fairly nice part of London and went on to get a PhD in computer science at University College London.
“At university I started becoming more aware of the world and realized I was quite fortunate to be where I am, to have had all the opportunities that I did. So, in 2012, I helped start an organization called Innovate Ventures to train and support Somali techies. The first program we ran was a two-week coding camp in Somalia for about 15 people. Though the impact was small at the time, for those individuals it meant something, and it was my first time going back to the continent; I hadn’t visited in more than two decades.
“I started to think how Innovate Ventures could have a much bigger impact. In 2015, we teamed up with two nonprofits that were running employment training for Somali youths, found some promising startups, and put them through a series of sessions on marketing, accounting, and product design. Five startups came out of that five-month incubator, and we awarded one winner around $2,500 in seed money to help kick-start its business.
“The next year saw us partner with Oxfam, VC4Africa [an online venture-capital community focused on Africa], and Telesom [the largest telco in Somaliland], and we ran a 10-week accelerator for startups. We were hoping to get 40 to 50 applicants, but we ended up getting around 180. We chose 12 startups for a two-week bootcamp and 10 to participate in the full 10-week training and mentoring program. The top four received a total of $15,000 in funding.
“This year, the accelerator will be 12 weeks long, and we’ve received almost 400 applicants. There are some large Somali companies that are interested in investing in startups and we want to bring them on board to help catalyze the startup scene. We also hope to persuade the Somali diaspora, including some of my colleagues at IBM, to donate their skills and invest in the local technology scene.
“Countries like Kenya and Rwanda have initiatives to become technology and innovation hubs in Africa. Somaliland and Somalia face fundamental challenges in health care, education, and agriculture, but innovation, technology, and startups have the potential to fast-track the country's development. I think we’ve started to take steps in that direction with the programs we’ve been running, and we’re slowly changing the impression people have when they view Somalia and Somaliland.”
—as told to Elizabeth Woyke

Age: 30 Affiliation: Wafa Games
Kathy Gong
Developing new models for entrepreneurship in China.
Kathy Gong became a chess master at 13, and four years later she boarded a plane with a one-way ticket to New York City to attend Columbia University. She knew little English at the time but learned as she studied, and after graduation she returned to China, where she soon became a standout among a rising class of fearless young technology entrepreneurs. Gong has launched a series of companies in different industries. One is Law.ai, a machine-learning company that created both a robotic divorce lawyer called Lily and a robotic visa and immigration lawyer called Mike. Now Gong and her team have founded a new company called Wafa Games that’s aiming to test the Middle East market, which Gong says most other game companies are ignoring.
—Nanette Byrnes

Age: 32 Affiliation: Caribou Biosciences
Rachel Haurwitz
Overseeing the commercialization of the promising gene-editing method called CRISPR.
Rachel Haurwitz quickly went from lab rat to CEO at the center of the frenzy over CRISPR, the breakthrough gene-editing technology. In 2012 she’d been working at Jennifer Doudna’s lab at the University of California, Berkeley, when it made a breakthrough showing how to edit any DNA strand using CRISPR. Weeks later, Haurwitz traded the lab’s top-floor views of San Francisco Bay for a sub-basement office with no cell coverage and one desk. There she became CEO of Caribou Biosciences, a spinout that has licensed Berkeley’s CRISPR patents and has made deals with drug makers, research firms, and agricultural giants like DuPont. She now oversees a staff of 44 that spends its time improving the core gene-editing technology. One recent development: a tool called SITE-Seq to help spot when CRISPR makes mistakes.
—Antonio Regalado

Age: 34 Affiliation: Royole
Bill Liu
His flexible components could change the way people use electronics.
Bill Liu thinks he can do something Samsung, LG, and Lenovo can’t: manufacture affordable, flexible electronics that can be bent, folded, or rolled up into a tube.
Other researchers and companies have had similar ideas, but Liu moved fast to commercialize his vision. In 2012, he founded a startup called Royole, and in 2014 the company—under his leadership as CEO—unveiled the world’s thinnest flexible display. Compared with rival technologies that can be curved into a fixed shape but aren’t completely pliable, Royole’s displays are as thin as an onion skin and can be rolled tightly around a pen. They can also be fabricated using simpler manufacturing processes, at lower temperatures, which allows Royole to make them at lower cost than competing versions. The company operates its own factory in Shenzhen, China, and is finishing construction on a 1.1-million-square-foot campus nearby. Once complete, the facility will produce 50 million flexible panels a year, says Royole.
Liu dreams of creating an all-in-one computing device that would combine the benefits of a watch, smartphone, tablet, and TV. “I think our flexible displays and sensors will eventually make that possible,” he says. For now, users will have to settle for a $799 headset that they can don like goggles to watch movies and video games in 3-D.
—Elizabeth Woyke

Age: 33 Affiliation: AutoX
Jianxiong Xiao
His company AutoX aims to make self-driving cars more accessible.
Jianxiong Xiao aims to make self-driving cars as widely accessible as computers are today. He’s the founder and CEO of AutoX, which recently demonstrated an autonomous car built not with expensive laser sensors but with ordinary webcams and some sophisticated computer-vision algorithms. Remarkably, the vehicle can navigate even at night and in bad weather.
AutoX hasn’t revealed details of its software, but Xiao is an expert at using deep learning, an AI technique that lets machines teach themselves to perform difficult tasks such as recognizing pedestrians from different angles and in different lighting.
Growing up without much money in Chaozhou, a city in China’s Guangdong province, Xiao became mesmerized by books about computers—fantastic-sounding machines that could encode knowledge, logic, and reason. Without access to the real thing, he taught himself to touch-type on a keyboard drawn on paper.
The soft-spoken entrepreneur asks people to call him “Professor X” rather than struggle to pronounce his name. He’s published dozens of papers demonstrating clever ways of teaching machines to understand and interact with the world. Last year, Xiao showed how an autonomous car could learn about salient visual features of the real world by contrasting features shown in Google Maps with images from Google Street View.
—Will Knight
" |
157 | 2,017 | "Svenja Hinderer | MIT Technology Review" | "https://www.technologyreview.com/innovator/svenja-hinderer" | "Inventors: Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 32 Affiliation: Fraunhofer Institute
Svenja Hinderer
A design for a heart valve that’s biodegradable—potentially eliminating the need for repeat surgeries.
Problem: Over 85,000 Americans receive artificial heart valves, but such valves don’t last forever, and replacing them involves a costly and invasive surgery. In children, they must be replaced repeatedly.
Solution: Svenja Hinderer, who leads a research group at the Fraunhofer Institute in Stuttgart, Germany, has created a biodegradable heart valve that studies strongly suggest will be replaced over time by a patient’s own cells.
To accomplish this, Hinderer created a scaffolding of biodegradable fibers that mimic the elastic properties of healthy tissues. To it she attaches proteins with the power to attract the stem cells that naturally circulate in the blood. The idea is that once implanted, her heart valve would be colonized and then replaced by a patient’s own cells within two to three years.
—Russ Juskalian

Age: 34 Affiliation: Sila Nanotechnologies
Gene Berdichevsky
Exploring new materials for better lithium-ion batteries.
As employee number seven at Tesla, Gene Berdichevsky was instrumental in solving one of its earliest challenges: the thousands of lithium-ion batteries the company planned to pack into its electric sports car caught fire far more often than manufacturers claimed. His solution: a combination of heat transfer materials, cooling channels, and battery arrangements that ensured any fire would be self-contained.
Now Berdichevsky has cofounded Sila Nanotechnologies, which aims to make better lithium-ion batteries. The company has developed silicon-based nanoparticles that can form a high-capacity anode. Silicon has almost 10 times the theoretical capacity of the material most often used in lithium-ion batteries, but it tends to swell during charging, causing damage. Sila’s particles are robust yet porous enough to accommodate that swelling, promising longer-lasting batteries.
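The “almost 10 times” figure follows from the commonly cited theoretical specific capacities of the two anode materials. Those values are assumed below; the article does not give them.

# Commonly cited theoretical specific capacities, in mAh per gram.
GRAPHITE_MAH_G = 372   # LiC6, the usual lithium-ion anode material
SILICON_MAH_G = 3579   # Li15Si4, silicon's room-temperature lithiated phase

print(f"silicon/graphite capacity ratio: {SILICON_MAH_G / GRAPHITE_MAH_G:.1f}x")
# -> about 9.6x, i.e. "almost 10 times" graphite's theoretical capacity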
—James Temple

Age: 32 Affiliation: University of Manchester’s Graphene Research Institute
Radha Boya
The world’s narrowest fluid channel could transform filtration of water and gases.
Beneath a microscope in Radha Boya’s lab, a thin sheet of carbon has an almost imperceptible channel cutting through its center, the depth of a single molecule of water. “I wanted to create the most ultimately small fluidic channels possible,” explains Boya. Her solution: identify the best building blocks to reliably and repeatedly build a structure containing unimaginably narrow capillaries. She settled on graphene, a form of carbon that is a single atom thick.
She positions two sheets of graphene (a single sheet is just 0.3 nanometers thick) next to each other with a small lateral gap between them. That is sandwiched on both sides with slabs of graphite, a material made of many layers of graphene stacked on top of each other. The result is a channel 0.3 nanometers deep and 100 nanometers wide, cutting through a block of graphite. By adding extra layers of graphene, she can tune the size of the channel in 0.3-nanometer increments.
But what fits through something so narrow? A water molecule—which itself measures around 0.3 nanometers across—can’t pass through the channel without application of pressure. But with two layers of graphene, and a 0.6-nanometer gap, water passes through at one meter per second. “The surface of graphene is slightly hydrophobic, so the water molecules stick to themselves rather than the walls,” says Boya. That helps the liquid slide through easily.
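Taken together, the 0.3-nanometer step size and the flow behavior above pin down the geometry. A small sketch using only the figures quoted in this profile:

GRAPHENE_LAYER_NM = 0.3   # per-layer step size quoted above
WATER_MOLECULE_NM = 0.3   # approximate diameter of a water molecule

def channel_depth_nm(spacer_layers: int) -> float:
    # Depth is set by how many graphene layers form the lateral spacer.
    return spacer_layers * GRAPHENE_LAYER_NM

for n in (1, 2, 3):
    depth = channel_depth_nm(n)
    flows_unpressurized = depth >= 2 * WATER_MOLECULE_NM
    print(f"{n} layer(s): {depth:.1f} nm deep, "
          f"water flows without pressure: {flows_unpressurized}")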
Because the gaps are so consistently sized, they could be used to build precisely tuned filtration systems. Boya has performed experiments that show her channels could filter salt ions from water, or separate large volatile organic compounds from smaller gas molecules. Because of the size consistency, her technology can filter more efficiently than others.
Boya currently works at the University of Manchester’s Graphene Research Institute in the U.K.—a monolithic black slab of a building that opened in 2015 to industrialize basic research on the material. It brands itself as the “home of graphene,” which seems appropriate given that Boya’s office is on the same corridor as those of Andre Geim and Kostya Novoselov, who won a Nobel Prize for discovering the material.
—Jamie Condliffe

Age: 31 Affiliation: Google Brain Team
Ian Goodfellow
Invented a way for neural networks to get better by working together.
A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. By applying game theory, he devised a way for a machine-learning system to effectively teach itself about how the world works. This ability could help make computers smarter by sidestepping the need to feed them painstakingly labeled training data.
Goodfellow was studying how neural networks can learn without human supervision. Usually a network needs labeled examples to learn effectively. While it’s also possible to learn from unlabeled data, this had typically not worked very well. Goodfellow, now a staff research scientist with the Google Brain team, wondered if two neural networks could work in tandem. One network could learn about a data set and generate examples; the second could try to tell whether they were real or fake, allowing the first to tweak its parameters in an effort to improve.
After returning from the pub, Goodfellow coded the first example of what he named a “generative adversarial network,” or GAN. The dueling-neural-network approach has vastly improved learning from unlabeled data. GANs can already perform some dazzling tricks. By internalizing the characteristics of a collection of photos, for example, a GAN can improve the resolution of a pixelated image. It can also dream up realistic fake photos, or apply a particular artistic style to an image. “You can think of generative models as giving artificial intelligence a form of imagination,” Goodfellow says.
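The two-network game is compact enough to sketch end to end. The toy below, written in PyTorch with arbitrary hyperparameters and a one-dimensional Gaussian standing in for the data set, is a minimal generative adversarial network rather than Goodfellow's original implementation.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps noise to fake samples; discriminator scores P(sample is real).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # The "data set": a Gaussian (mean 4.0, std 1.5) the generator must imitate.
    return torch.randn(n, 1) * 1.5 + 4.0

for step in range(2000):
    # Discriminator step: label real data 1 and generated data 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: adjust parameters so fakes get labeled real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f} (target 4.0, 1.5)")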
—Will Knight

Age: 32 Affiliation: Swiss Federal Institute of Technology
Lorenz Meier
An open-source autopilot for drones.
Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.
So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.
—Russ Juskalian

Age: 31 Affiliation: University of Washington
Franziska Roesner
Preparing for the security and privacy threats that augmented reality will bring.
What would hacks of augmented reality look like? Imagine a see-through AR display on your car helping you navigate—now imagine a hacker adding images of virtual dogs or pedestrians in the street.
Franzi Roesner, 31, recognized this challenge early on and is leading the thinking on what security and privacy provisions AR devices will need to protect both the devices and the people using them. Her research group at the University of Washington created a prototype AR platform that can, for example, block a windshield app from hiding any signs or people in the real world while a car is in motion.
“I’ve been asking the question, ‘What could a buggy or malicious application do?’” she says.
—Rachel Metz

Age: 31 Affiliation: Princeton University
Olga Russakovsky
Employed crowdsourcing to vastly improve computer-vision systems.
“It’s hard to navigate a human environment without seeing,” says Olga Russakovsky, an assistant professor at Princeton who is working to create artificial-intelligence systems that have a better understanding of what they’re looking at.
A few years ago, machines were capable of spotting only about 20 objects—a list that included people, airplanes, and chairs. Russakovsky devised a method, based partly on crowdsourcing the identification of objects in photos, that has led to AI systems capable of detecting 200 objects, including accordions and waffle irons.
Russakovsky ultimately expects AI to power robots or smart cameras that allow older people to remain at home, or autonomous vehicles that can confidently detect a person or a trash can in the road. “We’re not there yet,” she says, “and one of the big reasons is because the vision technology is just not there yet.”

A woman in a field dominated by men, Russakovsky started AI4ALL, a group that pushes for greater diversity among those working in artificial intelligence. While she wants greater ethnic and gender diversity, she also wants diversity of thought. “We are bringing the same kind of people over and over into the field,” she says. “And I think that’s actually going to harm us very seriously down the line.” If robotics are to become integral and integrated into our lives, she reasons, why shouldn’t there be people of varying professional backgrounds creating them, and helping them become attuned to what all types of people need?

Russakovsky took a rather conventional path from studying mathematics as an undergrad at Stanford, where she also earned a PhD in computer science, to a postdoc at Carnegie Mellon. But, she suggests, “We also need many others: biologists who are maybe not great at coding but can bring that expertise. We need psychologists—the diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”
—Erika Beras

Age: 34 Affiliation: Swiss Federal Institute of Technology
Michael Saliba
Finding ways to make promising perovskite-based solar cells practical.
Crystalline-silicon panels—which make up about 90 percent of deployed photovoltaics—are expensive, and they’re already bumping up against efficiency limits in converting sunlight to electricity. So a few years ago, Michael Saliba, a researcher at the Swiss Federal Institute of Technology in Lausanne, set out to investigate a new type of solar cell based on a family of materials known as perovskites. The first so-called perovskite solar cells, built in 2009, promised a cheaper, easier-to-process technology. But those early perovskite-based cells converted only about 4 percent of sunlight into electricity.
Saliba improved performance by adding positively charged ions to the known perovskites. He has since pushed solar cells built of the stuff to over 21 percent efficiency and shown the way to versions with far higher potential.
—Russ Juskalian

Age: 34 Affiliation: DeepMind
Gregory Wayne
Using an understanding of the brain to create smarter machines.
Greg Wayne, a researcher at DeepMind, designs software that gets better the same way a person might—by learning from its own mistakes. In a 2016 Nature paper, Wayne and his coauthors demonstrated that such software can solve problems involving graphs, logic puzzles, and tree-structured data that traditional neural networks used in artificial intelligence can’t.
Wayne’s computing insights play off his interest in connections between neurons in the human brain—why certain structures elicit specific sensations, emotions, or decisions. Now he often repurposes the concepts behind those brain structures as he designs machines.
—Caleb Garling
" |
158 | 2,015 | "Radha Boya | MIT Technology Review" | "https://www.technologyreview.com/innovator/radha-boya" | "Inventors: Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 32 Affiliation: University of Manchester’s Graphene Research Institute
Radha Boya
The world’s narrowest fluid channel could transform filtration of water and gases.
" |
159 | 2,017 | "Phillipa Gill | MIT Technology Review" | "https://www.technologyreview.com/innovator/phillipa-gill" | "Pioneers: They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Age: 32 Affiliation: University of Massachusetts, Amherst
Phillipa Gill
An empirical method for measuring Internet censorship.

Age: 33 Affiliation: Independent filmmaker
Jessica Brillhart
A pioneer in virtual-reality filmmaking.
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.”
—Caleb Garling

Age: 20 Affiliation: DoNotPay
Joshua Browder
Using chatbots to help people avoid legal fees.
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.” Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
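Mechanically, such a bot can be as simple as a scripted intake that fills a letter template. The questions and letter text below are hypothetical; the article does not disclose DoNotPay's actual prompts or wording.

# Hypothetical intake questions; the real service's prompts are not public here.
QUESTIONS = {
    "state": "Which state was the ticket issued in? ",
    "date": "What date was it issued (e.g. 2017-06-01)? ",
    "reason": "In one sentence, why should it be dismissed? ",
}

# Hypothetical form-letter template, filled from the answers.
LETTER = """To the Parking Adjudication Office, State of {state}:

I am writing to contest a parking citation issued on {date}.
{reason} I respectfully request that the charges be dropped.
"""

def run_intake() -> str:
    # Ask each question in turn, then render the completed letter.
    answers = {field: input(prompt) for field, prompt in QUESTIONS.items()}
    return LETTER.format(**answers)

if __name__ == "__main__":
    print(run_intake())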
—Peter Burrows

Age: 34 Affiliation: DeepMind
Volodymyr Mnih
The first system to play Atari games as well as a human can.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Mnih’s system was the first to combine the exploratory nature of reinforcement learning with the rigor of deep learning, which mirrors the way the human brain processes information—learning by example. His software learned to play the games much as a human would, through playful trial and error, while using the game score as the measure by which to hone and perfect its technique for each game.
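DeepMind's agent pairs a deep convolutional network with Q-learning. Stripped of the network, the score-driven trial-and-error loop looks like the tabular sketch below, played on a toy one-dimensional "game" rather than an Atari title.

import random
random.seed(0)

# Toy 'game': walk a 1-D board; the score is +1 for reaching the right edge.
N_STATES, ACTIONS = 6, (-1, +1)          # positions 0..5, move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:
            a = random.randrange(2)                      # explore
        else:
            a = max((0, 1), key=lambda i: Q[state][i])   # exploit best-known move
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0     # the 'game score'
        # Q-learning update: nudge the value toward reward + discounted future.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
# -> [1, 1, 1, 1, 1]: the learned policy is 'always move right'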
—Simon Parkin by Simon Parkin Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 22 Affiliation: Luminar Austin Russell Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
—Jamie Condliffe by Jamie Condliffe Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 34 Affiliation: University of Toronto Angela Schoellig Her algorithms are helping self-driving and self-flying vehicles get around more safely.
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-cubic-meter space for training drones to fly together in an enclosed area. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collision, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can perfectly locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
—Simon Parkin by Simon Parkin Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 31 Affiliation: University of Michigan Jenna Wiens Her computational models identify patients who are most at risk of a deadly infection.
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived.
Among the most lethal of these is Clostridium difficile.
The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile —the infections come on fast so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs are rising rapidly, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”
—Emily Mullin, August 16, 2017

Age: 32 | Affiliation: Alibaba Cloud
Hanqing Wu
A cheaper solution for devastating hacking attacks.
During a distributed denial-of-service (DDoS) attack, an attacker overwhelms a server with traffic until it collapses. The traditional way of fending off such an attack is to stockpile bandwidth so the server under attack always has more than enough capacity to absorb the traffic the attacker sends. But as hackers become capable of attacks with ever larger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
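In outline, the defense looks like the toy sketch below. This is an illustration of the concept, not Alibaba Cloud's implementation; the hostname is invented and the addresses come from a reserved documentation range.

# One hostname maps to a large pool of IP addresses; when the active
# address comes under attack, benign traffic is shifted to a fresh one
# instead of buying more bandwidth.
ip_pool = [f"203.0.113.{i}" for i in range(1, 255)]  # RFC 5737 test range
active = {"shop.example.com": ip_pool[0]}

def rotate_on_attack(hostname):
    # Retire the attacked address and move known-good clients to the next.
    current = active[hostname]
    nxt = ip_pool[(ip_pool.index(current) + 1) % len(ip_pool)]
    active[hostname] = nxt
    return nxt

print(rotate_on_attack("shop.example.com"))  # 203.0.113.2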
—Yiting Sun, August 16, 2017
" |
160 | 2,017 | "Olga Russakovsky | MIT Technology Review" | "https://www.technologyreview.com/innovator/olga-russakovsky" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Inventors Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 31 | Affiliation: Princeton University
Olga Russakovsky
Employed crowdsourcing to vastly improve computer-vision systems.
“It’s hard to navigate a human environment without seeing,” says Olga Russakovsky, an assistant professor at Princeton who is working to create artificial-intelligence systems that have a better understanding of what they’re looking at.
A few years ago, machines were capable of spotting only about 20 objects—a list that included people, airplanes, and chairs. Russakovsky devised a method, based partly on crowdsourcing the identification of objects in photos, that has led to AI systems capable of detecting 200 objects, including accordions and waffle irons.
Russakovsky ultimately expects AI to power robots or smart cameras that allow older people to remain at home, or autonomous vehicles that can confidently detect a person or a trash can in the road. “We’re not there yet,” she says, “and one of the big reasons is because the vision technology is just not there yet.”

A woman in a field dominated by men, Russakovsky started AI4ALL, a group that pushes for greater diversity among those working in artificial intelligence. While she wants greater ethnic and gender diversity, she also wants diversity of thought. “We are bringing the same kind of people over and over into the field,” she says. “And I think that’s actually going to harm us very seriously down the line.” If robotics are to become integral and integrated into our lives, she reasons, why shouldn’t there be people of varying professional backgrounds creating them, and helping them become attuned to what all types of people need?

Russakovsky took a rather conventional path from studying mathematics as an undergrad at Stanford, where she also earned a PhD in computer science, to a postdoc at Carnegie Mellon. But, she suggests, “We also need many others: biologists who are maybe not great at coding but can bring that expertise. We need psychologists—the diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”
—Erika Beras, August 16, 2017

Age: 34 | Affiliation: Sila Nanotechnologies
Gene Berdichevsky
Exploring new materials for better lithium-ion batteries.
As employee number seven at Tesla, Gene Berdichevsky was instrumental in solving one of its earliest challenges: the thousands of lithium-ion batteries the company planned to pack into its electric sports car caught fire far more often than manufacturers claimed. His solution: a combination of heat transfer materials, cooling channels, and battery arrangements that ensured any fire would be self-contained.
Now Berdichevsky has cofounded Sila Nanotechnologies, which aims to make better lithium-ion batteries. The company has developed silicon-based nanoparticles that can form a high-capacity anode. Silicon has almost 10 times the theoretical capacity of graphite, the anode material most often used in lithium-ion batteries, but it tends to swell during charging, causing damage. Sila’s particles are robust yet porous enough to accommodate that swelling, promising longer-lasting batteries.
—James Temple, August 16, 2017

Age: 32 | Affiliation: University of Manchester’s Graphene Research Institute
Radha Boya
The world’s narrowest fluid channel could transform filtration of water and gases.
Beneath a microscope in Radha Boya’s lab, a thin sheet of carbon has an almost imperceptible channel cutting through its center, the depth of a single molecule of water. “I wanted to create the most ultimately small fluidic channels possible,” explains Boya. Her solution: identify the best building blocks to reliably and repeatedly build a structure containing unimaginably narrow capillaries. She settled on graphene, a form of carbon that is a single atom thick.
She positions two sheets of graphene (a single sheet is just 0.3 nanometers thick) next to each other with a small lateral gap between them. That is sandwiched on both sides with slabs of graphite, a material made of many layers of graphene stacked on top of each other. The result is a channel 0.3 nanometers deep and 100 nanometers wide, cutting through a block of graphite. By adding extra layers of graphene, she can tune the size of the channel in 0.3-nanometer increments.
But what fits through something so narrow? A water molecule—which itself measures around 0.3 nanometers across—can’t pass through the channel without application of pressure. But with two layers of graphene, and a 0.6-nanometer gap, water passes through at one meter per second. “The surface of graphene is slightly hydrophobic, so the water molecules stick to themselves rather than the walls,” says Boya. That helps the liquid slide through easily.
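The tuning rule is simple arithmetic, as a quick sketch shows; the 0.3-nanometer figure is the per-layer increment described above.

# Each added graphene spacer layer deepens the channel by one
# 0.3-nanometer increment.
GRAPHENE_LAYER_NM = 0.3

def channel_depth_nm(spacer_layers):
    return spacer_layers * GRAPHENE_LAYER_NM

print(channel_depth_nm(1))  # 0.3 nm: water needs applied pressure to enter
print(channel_depth_nm(2))  # 0.6 nm: water flows at about one meter per second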
Because the gaps are so consistently sized, they could be used to build precisely tuned filtration systems. Boya has performed experiments that show her channels could filter salt ions from water, or separate large volatile organic compounds from smaller gas molecules. Because of the size consistency, her technology can filter more efficiently than others.
Boya currently works at the University of Manchester’s Graphene Research Institute in the U.K.—a monolithic black slab of a building that opened in 2015 to industrialize basic research on the material. It brands itself as the “home of graphene,” which seems appropriate given that Boya’s office is on the same corridor as those of Andre Geim and Kostya Novoselov, who won a Nobel Prize for discovering the material.
—Jamie Condliffe, August 16, 2017

Age: 31 | Affiliation: Google Brain Team
Ian Goodfellow
Invented a way for neural networks to get better by working together.
A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. By applying game theory, he devised a way for a machine-learning system to effectively teach itself about how the world works. This ability could help make computers smarter by sidestepping the need to feed them painstakingly labeled training data.
Goodfellow was studying how neural networks can learn without human supervision. Usually a network needs labeled examples to learn effectively. While it’s also possible to learn from unlabeled data, this had typically not worked very well. Goodfellow, now a staff research scientist with the Google Brain team, wondered if two neural networks could work in tandem. One network could learn about a data set and generate examples; the second could try to tell whether they were real or fake, allowing the first to tweak its parameters in an effort to improve.
After returning from the pub, Goodfellow coded the first example of what he named a “generative adversarial network,” or GAN. The dueling-neural-network approach has vastly improved learning from unlabeled data. GANs can already perform some dazzling tricks. By internalizing the characteristics of a collection of photos, for example, a GAN can improve the resolution of a pixelated image. It can also dream up realistic fake photos, or apply a particular artistic style to an image. “You can think of generative models as giving artificial intelligence a form of imagination,” Goodfellow says.
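A minimal version of the recipe fits in a few dozen lines. The sketch below is a toy PyTorch example with invented hyperparameters, not Goodfellow's original code: a generator and a discriminator trained against each other on a one-dimensional toy distribution.

# Generator maps noise to samples; discriminator scores real vs. fake.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: N(2, 0.5)
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator update: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator into scoring fakes 1.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(fake.mean().item())  # drifts toward 2.0 as the generator learns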
—Will Knight, August 16, 2017

Age: 32 | Affiliation: Fraunhofer Institute
Svenja Hinderer
A design for a heart valve that’s biodegradable—potentially eliminating the need for repeat surgeries.
Problem: Over 85,000 Americans receive artificial heart valves, but such valves don’t last forever, and replacing them involves a costly and invasive surgery. In children, they must be replaced repeatedly.
Solution: Svenja Hinderer, who leads a research group at the Fraunhofer Institute in Stuttgart, Germany, has created a biodegradable heart valve that studies strongly suggest will be replaced over time by a patient’s own cells.
To accomplish this, Hinderer created a scaffolding of biodegradable fibers that mimic the elastic properties of healthy tissues. To it she attaches proteins with the power to attract the stem cells that naturally circulate in the blood. The idea is that once implanted, her heart valve would be colonized and then replaced by a patient’s own cells within two to three years.
—Russ Juskalian, August 16, 2017

Age: 32 | Affiliation: Swiss Federal Institute of Technology
Lorenz Meier
An open-source autopilot for drones.
Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.
So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.
—Russ Juskalian, August 16, 2017

Age: 31 | Affiliation: University of Washington
Franziska Roesner
Preparing for the security and privacy threats that augmented reality will bring.
What would hacks of augmented reality look like? Imagine a see-through AR display on your car helping you navigate—now imagine a hacker adding images of virtual dogs or pedestrians in the street.
Franzi Roesner, 31, recognized this challenge early on and is leading the thinking into what security and privacy provisions AR devices will need to protect them, and ourselves. Her research group at the University of Washington created a prototype AR platform that can, for example, block a windshield app from hiding any signs or people in the real world while a car is in motion.
“I’ve been asking the question, ‘What could a buggy or malicious application do?’” she says.
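A hedged sketch of such an output policy follows; the function, box layout, and threshold behavior are invented for illustration, and Roesner's prototype is more general.

# Before compositing, reject any virtual object that would cover a
# detected sign or pedestrian while the vehicle is moving.
def safe_to_draw(obj_box, critical_boxes, vehicle_moving):
    def overlaps(a, b):
        # Boxes are (x1, y1, x2, y2) in screen pixels.
        return not (a[2] <= b[0] or b[2] <= a[0] or
                    a[3] <= b[1] or b[3] <= a[1])
    if vehicle_moving and any(overlaps(obj_box, c) for c in critical_boxes):
        return False  # would occlude a sign or person
    return True

stop_sign = (100, 40, 140, 80)
print(safe_to_draw((110, 50, 200, 120), [stop_sign], vehicle_moving=True))  # False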
—Rachel Metz, August 16, 2017

Age: 34 | Affiliation: Swiss Federal Institute of Technology
Michael Saliba
Finding ways to make promising perovskite-based solar cells practical.
Crystalline-silicon panels—which make up about 90 percent of deployed photovoltaics—are expensive, and they’re already bumping up against efficiency limits in converting sunlight to electricity. So a few years ago, Michael Saliba, a researcher at the Swiss Federal Institute of Technology in Lausanne, set out to investigate a new type of solar cell based on a family of materials known as perovskites. The first so-called perovskite solar cells, built in 2009, promised a cheaper, easier-to-process technology. But those early perovskite-based cells converted only about 4 percent of sunlight into electricity.
Saliba improved performance by adding positively charged ions to the known perovskites. He has since pushed solar cells built of the stuff to over 21 percent efficiency and shown the way to versions with far higher potential.
—Russ Juskalian, August 16, 2017

Age: 34 | Affiliation: DeepMind
Gregory Wayne
Using an understanding of the brain to create smarter machines.
Greg Wayne, a researcher at DeepMind, designs software that gets better the same way a person might—by learning from its own mistakes. A 2016 Nature paper that Wayne coauthored demonstrated that such software can solve problems involving graphs, logic puzzles, and tree structures that traditional neural networks used in artificial intelligence cannot.
Wayne’s computing insights play off his interest in connections between neurons in the human brain—why certain structures elicit specific sensations, emotions, or decisions. Now he often repurposes the concepts behind those brain structures as he designs machines.
—Caleb Garling, August 16, 2017
" |
161 | 2,017 | "Michael Saliba | MIT Technology Review" | "https://www.technologyreview.com/innovator/michael-saliba" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Inventors Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Full list Categories Past Years Age: 34 Affiliation: Swiss Federal Institute of Technology Michael Saliba Finding ways to make promising perovskite-based solar cells practical.
Crystalline-silicon panels—which make up about 90 percent of deployed photovoltaics—are expensive, and they’re already bumping up against efficiency limits in converting sunlight to electricity. So a few years ago, Michael Saliba, a researcher at the Swiss Federal Institute of Technology in Lausanne, set out to investigate a new type of solar cell based on a family of materials known as perovskites. The first so-called perovskite solar cells, built in 2009, promised a cheaper, easier-to-process technology. But those early perovskite-based cells converted only about 4 percent of sunlight into electricity.
Saliba improved performance by adding positively charged ions to the known perovskites. He has since pushed solar cells built of the stuff to over 21 percent efficiency and shown the way to versions with far higher potential.
—Russ Juskalian, August 16, 2017
" |
162 | 2,017 | "Lorenz Meier | MIT Technology Review" | "https://www.technologyreview.com/innovator/lorenz-meier" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Inventors Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Full list Categories Past Years Age: 32 Affiliation: Swiss Federal Institute of Technology Lorenz Meier An open-source autopilot for drones.
Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.
So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.
—Russ Juskalian, August 16, 2017
" |
163 | 2,017 | "Kathy Gong | MIT Technology Review" | "https://www.technologyreview.com/innovator/kathy-gong" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Entrepreneurs Meet the people who are taking innovations like CRISPR and flexible electronics and turning them into businesses.
Age: 30 | Affiliation: Wafa Games
Kathy Gong
Developing new models for entrepreneurship in China.
Kathy Gong became a chess master at 13, and four years later she boarded a plane with a one-way ticket to New York City to attend Columbia University. She knew little English at the time but learned as she studied, and after graduation she returned to China, where she soon became a standout among a rising class of fearless young technology entrepreneurs. Gong has launched a series of companies in different industries. One is Law.ai, a machine-learning company that created both a robotic divorce lawyer called Lily and a robotic visa and immigration lawyer called Mike. Now Gong and her team have founded a new company called Wafa Games that’s aiming to test the Middle East market, which Gong says most other game companies are ignoring.
—Nanette Byrnes, August 16, 2017

Age: 33 | Affiliation: Innovate Ventures, IBM Research Africa
Abdigani Diriye
A computer scientist who founded Somalia’s first incubator and startup accelerator.
“Like many Somalis, I ended up fleeing my homeland because of the civil war, back in the late 1980s. At age five I moved to the U.K. because I had family there and was able to get asylum. I grew up in a fairly nice part of London and went on to get a PhD in computer science at University College London.
“At university I started becoming more aware of the world and realized I was quite fortunate to be where I am, to have had all the opportunities that I did. So, in 2012, I helped start an organization called Innovate Ventures to train and support Somali techies. The first program we ran was a two-week coding camp in Somalia for about 15 people. Though the impact was small at the time, for those individuals it meant something, and it was my first time going back to the continent; I hadn’t visited in more than two decades.
“I started to think how Innovate Ventures could have a much bigger impact. In 2015, we teamed up with two nonprofits that were running employment training for Somali youths, found some promising startups, and put them through a series of sessions on marketing, accounting, and product design. Five startups came out of that five-month incubator, and we awarded one winner around $2,500 in seed money to help kick-start its business.
“The next year saw us partner with Oxfam, VC4Africa [an online venture-capital community focused on Africa], and Telesom [the largest telco in Somaliland], and we ran a 10-week accelerator for startups. We were hoping to get 40 to 50 applicants, but we ended up getting around 180. We chose 12 startups for a two-week bootcamp and 10 to participate in the full 10-week training and mentoring program. The top four received a total of $15,000 in funding.
“This year, the accelerator will be 12 weeks long, and we’ve received almost 400 applicants. There are some large Somali companies that are interested in investing in startups and we want to bring them on board to help catalyze the startup scene. We also hope to persuade the Somali diaspora, including some of my colleagues at IBM, to donate their skills and invest in the local technology scene.
“Countries like Kenya and Rwanda have initiatives to become technology and innovation hubs in Africa. Somaliland and Somalia face fundamental challenges in health care, education, and agriculture, but innovation, technology, and startups have the potential to fast-track the country's development. I think we’ve started to take steps in that direction with the programs we’ve been running, and we’re slowly changing the impression people have when they view Somalia and Somaliland.”
—As told to Elizabeth Woyke, August 16, 2017

Age: 30 | Affiliation: Singu
Tallis Gomes
An “Uber for beauty.”

Tallis Gomes had spent four years as the CEO of EasyTaxi, the “Uber of Brazil,” when he decided in 2015 to aim the same concept in a new direction—the beauty industry.
His on-demand services platform, called Singu, allows customers to summon a masseuse, manicurist, or other beauty professional to their home or office. Scheduling is done by an algorithm factoring in data from Singu and third parties, including location and weather. The professionals see fewer customers than they would in a shop, but they make more money because they don’t have to cover the overhead. Gomes says the algorithm can get a manicurist as many as 110 customers in a month, and earnings of $2,000—comparable to what a lawyer or junior engineer might make.
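The exact factors and weights behind that dispatch logic are not public, but its flavor is easy to sketch; everything below, from the weights to the weather penalty, is invented for illustration.

# Rank nearby professionals by travel distance, rating, and weather.
def dispatch_score(distance_km, rating, raining, w=(1.0, 2.0, 3.0)):
    return w[1] * rating - w[0] * distance_km - (w[2] if raining else 0.0)

candidates = {"ana": (2.0, 4.9), "bia": (6.5, 5.0)}  # name: (km, rating)
best = max(candidates, key=lambda n: dispatch_score(*candidates[n], raining=True))
print(best)  # "ana": the closer professional wins once distance is weighed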
—Nanette Byrnes, August 16, 2017

Age: 32 | Affiliation: Caribou Biosciences
Rachel Haurwitz
Overseeing the commercialization of the promising gene-editing method called CRISPR.
Rachel Haurwitz quickly went from lab rat to CEO at the center of the frenzy over CRISPR, the breakthrough gene-editing technology. In 2012 she’d been working at Jennifer Doudna’s lab at the University of California, Berkeley, when it made a breakthrough showing how to edit any DNA strand using CRISPR. Weeks later, Haurwitz traded the lab’s top-floor views of San Francisco Bay for a sub-basement office with no cell coverage and one desk. There she became CEO of Caribou Biosciences, a spinout that has licensed Berkeley’s CRISPR patents and has made deals with drug makers, research firms, and agricultural giants like DuPont. She now oversees a staff of 44 that spends its time improving the core gene-editing technology. One recent development: a tool called SITE-Seq to help spot when CRISPR makes mistakes.
—Antonio Regalado, August 16, 2017

Age: 34 | Affiliation: Royole
Bill Liu
His flexible components could change the way people use electronics.
Bill Liu thinks he can do something Samsung, LG, and Lenovo can’t: manufacture affordable, flexible electronics that can be bent, folded, or rolled up into a tube.
Other researchers and companies have had similar ideas, but Liu moved fast to commercialize his vision. In 2012, he founded a startup called Royole, and in 2014 the company—under his leadership as CEO—unveiled the world’s thinnest flexible display. Compared with rival technologies that can be curved into a fixed shape but aren’t completely pliable, Royole’s displays are as thin as an onion skin and can be rolled tightly around a pen. They can also be fabricated using simpler manufacturing processes, at lower temperatures, which allows Royole to make them at lower cost than competing versions. The company operates its own factory in Shenzhen, China, and is finishing construction on a 1.1-million-square-foot campus nearby. Once complete, the facility will produce 50 million flexible panels a year, says Royole.
Liu dreams of creating an all-in-one computing device that would combine the benefits of a watch, smartphone, tablet, and TV. “I think our flexible displays and sensors will eventually make that possible,” he says. For now, users will have to settle for a $799 headset that they can don like goggles to watch movies and video games in 3-D.
—Elizabeth Woyke, August 16, 2017

Age: 33 | Affiliation: AutoX
Jianxiong Xiao
His company AutoX aims to make self-driving cars more accessible.
Jianxiong Xiao aims to make self-driving cars as widely accessible as computers are today. He’s the founder and CEO of AutoX, which recently demonstrated an autonomous car built not with expensive laser sensors but with ordinary webcams and some sophisticated computer-vision algorithms. Remarkably, the vehicle can navigate even at night and in bad weather.
AutoX hasn’t revealed details of its software, but Xiao is an expert at using deep learning, an AI technique that lets machines teach themselves to perform difficult tasks such as recognizing pedestrians from different angles and in different lighting.
Growing up without much money in Chaozhou, a city in eastern China, Xiao became mesmerized by books about computers—fantastic-sounding machines that could encode knowledge, logic, and reason. Without access to the real thing, he taught himself to touch-type on a keyboard drawn on paper.
The soft-spoken entrepreneur asks people to call him “Professor X” rather than struggle to pronounce his name. He’s published dozens of papers demonstrating clever ways of teaching machines to understand and interact with the world. Last year, Xiao showed how an autonomous car could learn about salient visual features of the real world by contrasting features shown in Google Maps with images from Google Street View.
—Will Knight, August 16, 2017
" |
164 | 2,017 | "Joshua Browder | MIT Technology Review" | "https://www.technologyreview.com/innovator/joshua-browder" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Pioneers They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Age: 20 | Affiliation: DoNotPay
Joshua Browder
Using chatbots to help people avoid legal fees.
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.”

Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
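In spirit, the tool is a guided template fill. A toy version follows; DoNotPay's real templates and questions are more elaborate, and everything below is invented for illustration.

# Collect a few answers, then fill a form-letter template.
TEMPLATE = """To the {state} Parking Adjudication Office:

I am contesting the parking ticket issued on {date}.
{grounds} I respectfully request that the charge be dismissed.
"""

def contest_letter(state, date, grounds):
    return TEMPLATE.format(state=state, date=date, grounds=grounds)

print(contest_letter("New York", "2017-06-04",
                     "The posted signage at the location was obscured."))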
In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.
Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.
But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.
—Peter Burrows
Age: 32 Affiliation: University of Massachusetts, Amherst Phillipa Gill An empirical method for measuring Internet censorship.
Five years ago, when Phillipa Gill began a research fellowship at the University of Toronto’s Citizen Lab, she was surprised to find that there was no real accepted approach for empirically measuring censorship. So Gill, now an assistant professor of computer science at the University of Massachusetts, Amherst, built a set of new measurement tools to detect and quantify such practices. One technique automatically detects so-called block pages, which tell a user if a site has been blocked by a government or some other entity. In 2015, Gill and colleagues used her methods to confirm that a state-owned ISP in Yemen was using a traffic-filtering device to block political content during an armed conflict.
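One widely used signal in this line of work is that block pages tend to differ sharply in size from the pages they replace. A minimal sketch of that heuristic follows; the threshold and the control-measurement setup are illustrative assumptions, not Gill's exact tooling:

```python
# Sketch of one block-page heuristic from the censorship-measurement
# literature: a page whose size differs wildly from the same page fetched
# via an uncensored vantage point is flagged as a candidate block page.

import urllib.request

def fetch_length(url: str, timeout: float = 10.0) -> int:
    """Download a page and return its size in bytes."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return len(resp.read())

def looks_blocked(measured_len: int, reference_len: int, threshold: float = 0.3) -> bool:
    """Flag as a candidate block page if the size ratio is far below 1."""
    if reference_len == 0:
        return False
    return measured_len / reference_len < threshold

# reference_len would come from a control measurement made outside the
# censored network; candidates are then verified against known block pages.
```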
—Mike Orcutt
Age: 32 Affiliation: IBM Research in Zurich Fabian Menges A method for measuring temperatures at the nanoscale.
Problem: Complex microprocessors — like those at the heart of autonomous driving and artificial intelligence — can overheat and shut down. And when it happens, it’s usually the fault of an internal component on the scale of nanometers. But for decades, nobody who designed chips could figure out a way to measure temperatures down to the scale of such minuscule parts.
Solution: Fabian Menges, a researcher at IBM Research in Zurich, Switzerland, has invented a scanning probe method that measures changes to thermal resistance and variations in the rate at which heat flows through a surface. From this he can determine the temperature of structures smaller than 10 nanometers. This will let chipmakers come up with designs that are better at dissipating heat.
—Russ Juskalian
Age: 34 Affiliation: DeepMind Volodymyr Mnih The first system to play Atari games as well as a human can.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Mnih’s system was the first to combine reinforcement learning with deep learning, which mirrors the way the human brain processes information—learning by example. His software learned to play the games much as a human would, through playful trial and error, using the game score as the measure by which to hone and perfect its technique for each game.
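The learning rule at the core of this idea fits in a few lines. The toy sketch below uses a lookup table; Mnih's agent replaced the table with a deep convolutional network reading raw screen pixels, plus refinements such as experience replay. The `env` object, with `reset()`, `step()`, and an `actions` list, is an assumed toy interface:

```python
# Toy tabular Q-learning: learn action values from reward (the game score)
# by trial and error. The Atari "deep Q-network" swapped this table for a
# convolutional neural network over pixels.

import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = {}  # (state, action) -> estimated long-run score
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward: reward now + discounted best future value.
            best_next = 0.0 if done else max(q.get((next_state, a), 0.0) for a in env.actions)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return q
```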
—Simon Parkin
Age: 22 Affiliation: Luminar Austin Russell Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
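The arithmetic behind that claim is easy to check. The article says only "twice as far out," so the 100-meter baseline range below is an assumption for illustration:

```python
# Back-of-the-envelope check: doubling detection range buys roughly three
# extra seconds of warning at highway speed.

MPH_TO_MS = 0.44704  # miles per hour to meters per second

def extra_warning_seconds(baseline_range_m: float, speed_mph: float) -> float:
    extra_distance = baseline_range_m  # doubling the range adds one baseline
    return extra_distance / (speed_mph * MPH_TO_MS)

print(round(extra_warning_seconds(100, 70), 1))  # ~3.2 seconds at 70 mph
```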
—Jamie Condliffe
Age: 34 Affiliation: University of Toronto Angela Schoellig Her algorithms are helping self-driving and self-flying vehicles get around more safely.
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-by-10-by-10-meter space for training drones to fly together in an enclosed area. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collision, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can perfectly locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
—Simon Parkin
Age: 32 Affiliation: Alibaba Cloud Hanqing Wu A cheaper solution for devastating hacking attacks.
During a distributed denial-of-service (DDoS) attack, an attacker overwhelms a server with traffic until it collapses. The traditional way of fending off such an attack is to over-provision bandwidth, so the server under attack always has more than enough capacity to absorb what the attacker unleashes. But as hackers become capable of attacks with bigger and bigger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
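The article gives only the outline, but the idea can be illustrated. The sketch below is a guess at the shape of such a system, not Alibaba Cloud's implementation: a frontend hashes each client onto a pool of addresses and retires any address that comes under fire, so benign clients re-resolve to a healthy one.

```python
# Conceptual sketch of an "elastic security network": one hostname fans out
# to many IPs; a flooded IP is sacrificed while benign clients move on.
# All names, pool sizes, and mechanics here are illustrative assumptions.

import hashlib

class ElasticFrontend:
    def __init__(self, ip_pool):
        self.ip_pool = list(ip_pool)
        self.blacklisted = set()

    def resolve(self, client_id: str) -> str:
        """Spread clients deterministically across the healthy IPs."""
        healthy = [ip for ip in self.ip_pool if ip not in self.blacklisted]
        digest = hashlib.sha256(client_id.encode()).hexdigest()
        return healthy[int(digest, 16) % len(healthy)]

    def under_attack(self, ip: str) -> None:
        """Retire a flooded IP; clients re-resolve to the remaining pool."""
        self.blacklisted.add(ip)

frontend = ElasticFrontend(f"203.0.113.{i}" for i in range(1, 101))
ip = frontend.resolve("alice")
frontend.under_attack(ip)
print(frontend.resolve("alice"))  # now lands on a different, healthy IP
```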
—Yiting Sun
" |
165 | 2,017 | "Jianxiong Xiao | MIT Technology Review" | "https://www.technologyreview.com/innovator/jianxiong-xiao" | "Entrepreneurs: Meet the people who are taking innovations like CRISPR and flexible electronics and turning them into businesses.
Age: 33 Affiliation: AutoX Jianxiong Xiao His company AutoX aims to make self-driving cars more accessible.
Jianxiong Xiao aims to make self-driving cars as widely accessible as computers are today. He’s the founder and CEO of AutoX, which recently demonstrated an autonomous car built not with expensive laser sensors but with ordinary webcams and some sophisticated computer-vision algorithms. Remarkably, the vehicle can navigate even at night and in bad weather.
AutoX hasn’t revealed details of its software, but Xiao is an expert at using deep learning, an AI technique that lets machines teach themselves to perform difficult tasks such as recognizing pedestrians from different angles and in different lighting.
Growing up without much money in Chaozhou, a city in eastern China, Xiao became mesmerized by books about computers—fantastic-sounding machines that could encode knowledge, logic, and reason. Without access to the real thing, he taught himself to touch-type on a keyboard drawn on paper.
The soft-spoken entrepreneur asks people to call him “Professor X” rather than struggle to pronounce his name. He’s published dozens of papers demonstrating clever ways of teaching machines to understand and interact with the world. Last year, Xiao showed how an autonomous car could learn about salient visual features of the real world by contrasting features shown in Google Maps with images from Google Street View.
—Will Knight
Age: 33 Affiliation: Innovate Ventures, IBM Research Africa Abdigani Diriye A computer scientist who founded Somalia’s first incubator and startup accelerator.
“Like many Somalis, I ended up fleeing my homeland because of the civil war, back in the late 1980s. At age five I moved to the U.K. because I had family there and was able to get asylum. I grew up in a fairly nice part of London and went on to get a PhD in computer science at University College London.
“At university I started becoming more aware of the world and realized I was quite fortunate to be where I am, to have had all the opportunities that I did. So, in 2012, I helped start an organization called Innovate Ventures to train and support Somali techies. The first program we ran was a two-week coding camp in Somalia for about 15 people. Though the impact was small at the time, for those individuals it meant something, and it was my first time going back to the continent; I hadn’t visited in more than two decades.
“I started to think how Innovate Ventures could have a much bigger impact. In 2015, we teamed up with two nonprofits that were running employment training for Somali youths, found some promising startups, and put them through a series of sessions on marketing, accounting, and product design. Five startups came out of that five-month incubator, and we awarded one winner around $2,500 in seed money to help kick-start its business.
“The next year saw us partner with Oxfam, VC4Africa [an online venture-capital community focused on Africa], and Telesom [the largest telco in Somaliland], and we ran a 10-week accelerator for startups. We were hoping to get 40 to 50 applicants, but we ended up getting around 180. We chose 12 startups for a two-week bootcamp and 10 to participate in the full 10-week training and mentoring program. The top four received a total of $15,000 in funding.
“This year, the accelerator will be 12 weeks long, and we’ve received almost 400 applicants. There are some large Somali companies that are interested in investing in startups and we want to bring them on board to help catalyze the startup scene. We also hope to persuade the Somali diaspora, including some of my colleagues at IBM, to donate their skills and invest in the local technology scene.
“Countries like Kenya and Rwanda have initiatives to become technology and innovation hubs in Africa. Somaliland and Somalia face fundamental challenges in health care, education, and agriculture, but innovation, technology, and startups have the potential to fast-track the country’s development. I think we’ve started to take steps in that direction with the programs we’ve been running, and we’re slowly changing the impression people have when they view Somalia and Somaliland.” —as told to Elizabeth Woyke
Age: 30 Affiliation: Singu Tallis Gomes An “Uber for beauty.”
Tallis Gomes had spent four years as the CEO of EasyTaxi, the “Uber of Brazil,” when he decided in 2015 to aim the same concept in a new direction—the beauty industry.
His on-demand services platform, called Singu, allows customers to summon a masseuse, manicurist, or other beauty professional to their home or office. Scheduling is done by an algorithm factoring in data from Singu and third parties, including location and weather. The professionals see fewer customers than they would in a shop, but they make more money because they don’t have to cover the overhead. Gomes says the algorithm can get a manicurist as many as 110 customers in a month, and earnings of $2,000—comparable to what a lawyer or junior engineer might make.
—Nanette Byrnes
Age: 30 Affiliation: Wafa Games Kathy Gong Developing new models for entrepreneurship in China.
Kathy Gong became a chess master at 13, and four years later she boarded a plane with a one-way ticket to New York City to attend Columbia University. She knew little English at the time but learned as she studied, and after graduation she returned to China, where she soon became a standout among a rising class of fearless young technology entrepreneurs. Gong has launched a series of companies in different industries. One is Law.ai, a machine-learning company that created both a robotic divorce lawyer called Lily and a robotic visa and immigration lawyer called Mike. Now Gong and her team have founded a new company called Wafa Games that’s aiming to test the Middle East market, which Gong says most other game companies are ignoring.
—Nanette Byrnes
Age: 32 Affiliation: Caribou Biosciences Rachel Haurwitz Overseeing the commercialization of the promising gene-editing method called CRISPR.
Rachel Haurwitz quickly went from lab rat to CEO at the center of the frenzy over CRISPR, the breakthrough gene-editing technology. In 2012 she’d been working at Jennifer Doudna’s lab at the University of California, Berkeley, when it made a breakthrough showing how to edit any DNA strand using CRISPR. Weeks later, Haurwitz traded the lab’s top-floor views of San Francisco Bay for a sub-basement office with no cell coverage and one desk. There she became CEO of Caribou Biosciences, a spinout that has licensed Berkeley’s CRISPR patents and has made deals with drug makers, research firms, and agricultural giants like DuPont. She now oversees a staff of 44 that spends its time improving the core gene-editing technology. One recent development: a tool called SITE-Seq to help spot when CRISPR makes mistakes.
—Antonio Regalado
Age: 34 Affiliation: Royole Bill Liu His flexible components could change the way people use electronics.
Bill Liu thinks he can do something Samsung, LG, and Lenovo can’t: manufacture affordable, flexible electronics that can be bent, folded, or rolled up into a tube.
Other researchers and companies have had similar ideas, but Liu moved fast to commercialize his vision. In 2012, he founded a startup called Royole , and in 2014 the company—under his leadership as CEO—unveiled the world’s thinnest flexible display. Compared with rival technologies that can be curved into a fixed shape but aren’t completely pliable, Royole’s displays are as thin as an onion skin and can be rolled tightly around a pen. They can also be fabricated using simpler manufacturing processes, at lower temperatures, which allows Royole to make them at lower cost than competing versions. The company operates its own factory in Shenzhen, China, and is finishing construction on a 1.1-million-square-foot campus nearby. Once complete, the facility will produce 50 million flexible panels a year, says Royole.
Liu dreams of creating an all-in-one computing device that would combine the benefits of a watch, smartphone, tablet, and TV. “I think our flexible displays and sensors will eventually make that possible,” he says. For now, users will have to settle for a $799 headset that they can don like goggles to watch movies and video games in 3-D.
—Elizabeth Woyke
" |
166 | 2,017 | "Jessica Brillhart | MIT Technology Review" | "https://www.technologyreview.com/innovator/jessica-brillhart" | "Age: 33 Affiliation: Independent filmmaker Jessica Brillhart A pioneer in virtual-reality filmmaking.
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.”
—Caleb Garling
" |
167 | 2,015 | "Jenna Wiens | MIT Technology Review" | "https://www.technologyreview.com/innovator/jenna-wiens" | "Age: 31 Affiliation: University of Michigan Jenna Wiens Her computational models identify patients who are most at risk of a deadly infection.
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived. Among the most lethal of these is Clostridium difficile. The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
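A minimal sketch of that "throw everything in" strategy, using scikit-learn on placeholder data rather than Wiens's actual pipeline: fit a sparse linear classifier over thousands of EHR-derived features, then read candidate risk factors off the surviving coefficients.

```python
# Illustrative sketch: a wide, sparse model over binary EHR features.
# The data here is random placeholder input, not real patient records.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 1000)).astype(float)  # patients x features
y = rng.integers(0, 2, size=5000)                        # 1 = acquired C. diff

# L1 regularization drives most coefficients to zero, keeping the few
# features that carry signal at this particular hospital.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

top = np.argsort(-np.abs(model.coef_[0]))[:10]
print("strongest candidate risk-factor features:", top)
```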
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile—the infections come on fast, so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs are rising exponentially, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”
—Emily Mullin
" |
168 | 2,017 | "Ian Goodfellow | MIT Technology Review" | "https://www.technologyreview.com/innovator/ian-goodfellow" | "Inventors: Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 31 Affiliation: Google Brain Team Ian Goodfellow Invented a way for neural networks to get better by working together.
A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. By applying game theory, he devised a way for a machine-learning system to effectively teach itself about how the world works. This ability could help make computers smarter by sidestepping the need to feed them painstakingly labeled training data.
Goodfellow was studying how neural networks can learn without human supervision. Usually a network needs labeled examples to learn effectively. While it’s also possible to learn from unlabeled data, this had typically not worked very well. Goodfellow, now a staff research scientist with the Google Brain team, wondered if two neural networks could work in tandem. One network could learn about a data set and generate examples; the second could try to tell whether they were real or fake, allowing the first to tweak its parameters in an effort to improve.
After returning from the pub, Goodfellow coded the first example of what he named a “generative adversarial network,” or GAN. The dueling-neural-network approach has vastly improved learning from unlabeled data. GANs can already perform some dazzling tricks. By internalizing the characteristics of a collection of photos, for example, a GAN can improve the resolution of a pixelated image. It can also dream up realistic fake photos, or apply a particular artistic style to an image. “You can think of generative models as giving artificial intelligence a form of imagination,” Goodfellow says.
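The two-network game is compact enough to sketch. Below is a toy GAN in PyTorch, assuming a one-dimensional Gaussian as the "data" to be imitated; it is an illustration of the adversarial setup Goodfellow describes, not his original code:

```python
# Toy GAN: the generator learns to mimic samples from N(4, 1.5); the
# discriminator learns to tell real samples from generated ones.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5)
    fake = G(torch.randn(64, 8))            # generator output from noise

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(fake.mean().item(), fake.std().item())  # should drift toward 4 and 1.5
```

Each network's loss is the other's gain, which is exactly the game-theoretic framing that made the idea click.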
—Will Knight
Age: 34 Affiliation: Sila Nanotechnologies Gene Berdichevsky Exploring new materials for better lithium-ion batteries.
As employee number seven at Tesla, Gene Berdichevsky was instrumental in solving one of its earliest challenges: the thousands of lithium-ion batteries the company planned to pack into its electric sports car caught fire far more often than manufacturers claimed. His solution: a combination of heat transfer materials, cooling channels, and battery arrangements that ensured any fire would be self-contained.
Now Berdichevsky has cofounded Sila Nanotechnologies, which aims to make better lithium-ion batteries. The company has developed silicon-based nanoparticles that can form a high-capacity anode. Silicon has almost 10 times the theoretical capacity of the material most often used in lithium-ion batteries, but it tends to swell during charging, causing damage. Sila’s particles are robust yet porous enough to accommodate that swelling, promising longer-lasting batteries.
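The "almost 10 times" figure follows from standard literature values for theoretical lithium-storage capacity; these are textbook numbers, not Sila's own:

```python
# Theoretical gravimetric capacities for lithium storage (literature values).

GRAPHITE_MAH_PER_G = 372   # LiC6, the usual lithium-ion anode material
SILICON_MAH_PER_G = 3579   # Li15Si4, silicon's room-temperature limit

print(SILICON_MAH_PER_G / GRAPHITE_MAH_PER_G)  # ~9.6x
```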
—James Temple
Age: 32 Affiliation: University of Manchester’s Graphene Research Institute Radha Boya The world’s narrowest fluid channel could transform filtration of water and gases.
Beneath a microscope in Radha Boya’s lab, a thin sheet of carbon has an almost imperceptible channel cutting through its center, the depth of a single molecule of water. “I wanted to create the most ultimately small fluidic channels possible,” explains Boya. Her solution: identify the best building blocks to reliably and repeatedly build a structure containing unimaginably narrow capillaries. She settled on graphene, a form of carbon that is a single atom thick.
She positions two sheets of graphene (a single sheet is just 0.3 nanometers thick) next to each other with a small lateral gap between them. That is sandwiched on both sides with slabs of graphite, a material made of many layers of graphene stacked on top of each other. The result is a channel 0.3 nanometers deep and 100 nanometers wide, cutting through a block of graphite. By adding extra layers of graphene, she can tune the size of the channel in 0.3-nanometer increments.
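The geometry is worth making concrete. A small sketch of the relationship the article describes, with the 0.3-nanometer figures taken directly from it:

```python
# Channel depth is set by how many graphene layers (0.3 nm each) form the
# spacer between the graphite slabs.

GRAPHENE_LAYER_NM = 0.3
WATER_MOLECULE_NM = 0.3  # approximate size of a water molecule

def channel_depth_nm(spacer_layers: int) -> float:
    return spacer_layers * GRAPHENE_LAYER_NM

for n in (1, 2, 3):
    depth = channel_depth_nm(n)
    flows_freely = depth > WATER_MOLECULE_NM  # one layer is a squeeze
    print(f"{n} layer(s): {depth:.1f} nm deep, water flows without pressure: {flows_freely}")
```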
But what fits through something so narrow? A water molecule—which itself measures around 0.3 nanometers across—can’t pass through the channel without application of pressure. But with two layers of graphene, and a 0.6-nanometer gap, water passes through at one meter per second. “The surface of graphene is slightly hydrophobic, so the water molecules stick to themselves rather than the walls,” says Boya. That helps the liquid slide through easily.
Because the gaps are so consistently sized, they could be used to build precisely tuned filtration systems. Boya has performed experiments that show her channels could filter salt ions from water, or separate large volatile organic compounds from smaller gas molecules. Because of the size consistency, her technology can filter more efficiently than others.
Boya currently works at the University of Manchester’s Graphene Research Institute in the U.K.—a monolithic black slab of a building that opened in 2015 to industrialize basic research on the material. It brands itself as the “home of graphene,” which seems appropriate given that Boya’s office is on the same corridor as those of Andre Geim and Kostya Novoselov, who won a Nobel Prize for discovering the material.
—Jamie Condliffe
Age: 32 Affiliation: Fraunhofer Institute Svenja Hinderer A design for a heart valve that’s biodegradable—potentially eliminating the need for repeat surgeries.
Problem: Over 85,000 Americans receive artificial heart valves, but such valves don’t last forever, and replacing them involves a costly and invasive surgery. In children, they must be replaced repeatedly.
Solution: Svenja Hinderer, who leads a research group at the Fraunhofer Institute in Stuttgart, Germany, has created a biodegradable heart valve that studies strongly suggest will be replaced over time by a patient’s own cells.
To accomplish this, Hinderer created a scaffolding of biodegradable fibers that mimic the elastic properties of healthy tissues. To it she attaches proteins with the power to attract the stem cells that naturally circulate in the blood. The idea is that once implanted, her heart valve would be colonized and then replaced by a patient’s own cells within two to three years.
— Russ Juskalian, August 16, 2017

Age: 32 | Affiliation: Swiss Federal Institute of Technology
Lorenz Meier: An open-source autopilot for drones.
Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.
So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.
— Russ Juskalian, August 16, 2017

Age: 31 | Affiliation: University of Washington
Franziska Roesner: Preparing for the security and privacy threats that augmented reality will bring.
What would hacks of augmented reality look like? Imagine a see-through AR display on your car helping you navigate—now imagine a hacker adding images of virtual dogs or pedestrians in the street.
Franzi Roesner, 31, recognized this challenge early on and is leading the thinking into what security and privacy provisions AR devices will need to protect them, and ourselves. Her research group at the University of Washington created a prototype AR platform that can, for example, block a windshield app from hiding any signs or people in the real world while a car is in motion.
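A minimal sketch of the kind of output policy such a platform could enforce. The object representation and the rule here are invented for illustration; they are not Roesner's actual system:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned screen region (hypothetical representation)."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def filter_overlays(overlays: list[Box], safety_objects: list[Box],
                    vehicle_moving: bool) -> list[Box]:
    """Drop any virtual overlay that would occlude a detected sign or
    pedestrian while the car is in motion; allow everything when parked."""
    if not vehicle_moving:
        return overlays
    return [o for o in overlays
            if not any(o.overlaps(s) for s in safety_objects)]
```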
“I’ve been asking the question, ‘What could a buggy or malicious application do?’” she says.
— Rachel Metz, August 16, 2017

Age: 31 | Affiliation: Princeton University
Olga Russakovsky: Employed crowdsourcing to vastly improve computer-vision systems.
“It’s hard to navigate a human environment without seeing,” says Olga Russakovsky, an assistant professor at Princeton who is working to create artificial-intelligence systems that have a better understanding of what they’re looking at.
A few years ago, machines were capable of spotting only about 20 objects—a list that included people, airplanes, and chairs. Russakovsky devised a method, based partly on crowdsourcing the identification of objects in photos, that has led to AI systems capable of detecting 200 objects, including accordions and waffle irons.
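Crowdsourced labeling needs an aggregation step to reconcile disagreeing annotators. A minimal majority-vote version, far simpler than what a large-scale effort like hers actually requires:

```python
from collections import Counter

def aggregate_labels(votes: dict[str, list[str]]) -> dict[str, str]:
    """Pick the most common crowd label for each image (ties broken arbitrarily)."""
    return {image: Counter(labels).most_common(1)[0][0]
            for image, labels in votes.items()}

votes = {"img_001.jpg": ["accordion", "accordion", "concertina"],
         "img_002.jpg": ["waffle iron", "waffle iron", "waffle iron"]}
print(aggregate_labels(votes))  # {'img_001.jpg': 'accordion', 'img_002.jpg': 'waffle iron'}
```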
Russakovsky ultimately expects AI to power robots or smart cameras that allow older people to remain at home, or autonomous vehicles that can confidently detect a person or a trash can in the road. “We’re not there yet,” she says, “and one of the big reasons is because the vision technology is just not there yet.”

A woman in a field dominated by men, Russakovsky started AI4ALL, a group that pushes for greater diversity among those working in artificial intelligence. While she wants greater ethnic and gender diversity, she also wants diversity of thought. “We are bringing the same kind of people over and over into the field,” she says. “And I think that’s actually going to harm us very seriously down the line.” If robotics are to become integral and integrated into our lives, she reasons, why shouldn’t there be people of varying professional backgrounds creating them, and helping them become attuned to what all types of people need?

Russakovsky took a rather conventional path from studying mathematics as an undergrad at Stanford, where she also earned a PhD in computer science, to a postdoc at Carnegie Mellon. But, she suggests, “We also need many others: biologists who are maybe not great at coding but can bring that expertise. We need psychologists—the diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”

— Erika Beras, August 16, 2017

Age: 34 | Affiliation: Swiss Federal Institute of Technology
Michael Saliba: Finding ways to make promising perovskite-based solar cells practical.
Crystalline-silicon panels—which make up about 90 percent of deployed photovoltaics—are expensive, and they’re already bumping up against efficiency limits in converting sunlight to electricity. So a few years ago, Michael Saliba, a researcher at the Swiss Federal Institute of Technology in Lausanne, set out to investigate a new type of solar cell based on a family of materials known as perovskites. The first so-called perovskite solar cells, built in 2009, promised a cheaper, easier-to-process technology. But those early perovskite-based cells converted only about 4 percent of sunlight into electricity.
Saliba improved performance by adding positively charged ions to the known perovskites. He has since pushed solar cells built of the stuff to over 21 percent efficiency and shown the way to versions with far higher potential.
— Russ Juskalian, August 16, 2017

Age: 34 | Affiliation: DeepMind
Gregory Wayne: Using an understanding of the brain to create smarter machines.
Greg Wayne, a researcher at DeepMind, designs software that gets better the same way a person might—by learning from its own mistakes. A 2016 Nature paper that Wayne coauthored demonstrated that such software can solve tasks involving graphs, logic puzzles, and tree structures that traditional neural networks used in artificial intelligence can’t handle.
Wayne’s computing insights play off his interest in connections between neurons in the human brain—why certain structures elicit specific sensations, emotions, or decisions. Now he often repurposes the concepts behind those brain structures as he designs machines.
— Caleb Garling, August 16, 2017
" |
169 | 2,017 | "Hanqing Wu | MIT Technology Review" | "https://www.technologyreview.com/innovator/hanqing-wu" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Pioneers They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Age: 32 | Affiliation: Alibaba Cloud
Hanqing Wu: A cheaper solution for devastating hacking attacks.
During a distributed denial of service (DDoS) attack, an attacker overwhelms a website’s servers with traffic until they collapse. The traditional way of fending off an attack like this is to pile up bandwidth so the server under attack always has more than enough volume to handle what the attacker has released. But as hackers become capable of attacks with bigger and bigger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
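In outline, the scheme is an indirection layer between a hostname and a large pool of addresses, with rotation on attack. A minimal sketch; the class, pool, and hostname are invented, and the addresses come from a documentation-reserved range rather than any real deployment:

```python
import itertools

IP_POOL = [f"203.0.113.{i}" for i in range(1, 255)]  # reserved example range

class ElasticResolver:
    """Hypothetical resolver: one hostname, many possible IP addresses."""
    def __init__(self, pool):
        self._next_ip = itertools.cycle(pool)
        self._current = {}

    def resolve(self, hostname: str) -> str:
        if hostname not in self._current:
            self._current[hostname] = next(self._next_ip)
        return self._current[hostname]

    def rotate(self, hostname: str) -> str:
        """On DDoS detection, divert benign traffic to a fresh address;
        the flooded address is left behind to absorb the attack."""
        self._current[hostname] = next(self._next_ip)
        return self._current[hostname]

resolver = ElasticResolver(IP_POOL)
print(resolver.resolve("shop.example.com"))  # 203.0.113.1
print(resolver.rotate("shop.example.com"))   # 203.0.113.2 after an attack
```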
— Yiting Sun, August 16, 2017

Age: 33 | Affiliation: Independent filmmaker
Jessica Brillhart: A pioneer in virtual-reality filmmaking.
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.”

— Caleb Garling, August 16, 2017

Age: 20 | Affiliation: DoNotPay
Joshua Browder: Using chatbots to help people avoid legal fees.
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.”

Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
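Mechanically, the early tool was close to a guided form-filler: a few structured answers dropped into a template. A toy version; the wording and fields are invented, not DoNotPay's:

```python
# Toy parking-ticket contest letter: structured answers -> form letter.
TEMPLATE = """To the {city} Parking Adjudication Office:

I am writing to contest ticket {ticket_id}, issued on {date} in {state}.
{reason}

I respectfully request that the charge be dismissed.
"""

def contest_letter(state: str, city: str, ticket_id: str,
                   date: str, reason: str) -> str:
    return TEMPLATE.format(state=state, city=city, ticket_id=ticket_id,
                           date=date, reason=reason)

print(contest_letter("NY", "New York", "ABC-123", "2017-06-01",
                     "The signage at this location was obscured by construction."))
```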
In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.
Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.
But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.
— Peter Burrows, August 16, 2017

Age: 32 | Affiliation: University of Massachusetts, Amherst
Phillipa Gill: An empirical method for measuring Internet censorship.
Five years ago, when Phillipa Gill began a research fellowship at the University of Toronto’s Citizen Lab, she was surprised to find that there was no real accepted approach for empirically measuring censorship. So Gill, now an assistant professor of computer science at the University of Massachusetts, Amherst, built a set of new measurement tools to detect and quantify such practices. One technique automatically detects so-called block pages, which tell a user if a site has been blocked by a government or some other entity. In 2015, Gill and colleagues used her methods to confirm that a state-owned ISP in Yemen was using a traffic-filtering device to block political content during an armed conflict.
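One simple signal such tools can use: a block page is usually far smaller than the real page it replaces. A rough sketch of that heuristic; the threshold is illustrative and not taken from Gill's papers:

```python
def looks_like_block_page(test_html: str, reference_html: str,
                          ratio_threshold: float = 0.3) -> bool:
    """Flag a fetch as a possible block page when it is drastically smaller
    than an uncensored reference copy of the same URL."""
    if not reference_html:
        return False  # no baseline to compare against
    return len(test_html) / len(reference_html) < ratio_threshold
```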
— Mike Orcutt, August 16, 2017

Age: 32 | Affiliation: IBM Research in Zurich
Fabian Menges: A method for measuring temperatures at the nanoscale.
Problem: Complex microprocessors — like those at the heart of autonomous driving and artificial intelligence — can overheat and shut down. And when it happens, it’s usually the fault of an internal component on the scale of nanometers. But for decades, nobody who designed chips could figure out a way to measure temperatures down to the scale of such minuscule parts.
Solution: Fabian Menges, a researcher at IBM Research in Zurich, Switzerland, has invented a scanning probe method that measures changes to thermal resistance and variations in the rate at which heat flows through a surface. From this he can determine the temperature of structures smaller than 10 nanometers. This will let chipmakers come up with designs that are better at dissipating heat.
— Russ Juskalian, August 16, 2017

Age: 34 | Affiliation: DeepMind
Volodymyr Mnih: The first system to play Atari games as well as a human can.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Mnih’s system was the first to combine the playful characteristics of reinforcement learning with the rigorous approach of deep learning, which mirrors the way the human brain processes information—learning by example. His software learned to play the games much as a human would, through trial and error, while using the game score as the measurement by which to hone and perfect its technique for each game.
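The trial-and-error core is easiest to see in tabular Q-learning; DQN's leap was replacing the table with a deep network fed raw game frames, with the score as reward. A toy sketch on a five-state chain, not Atari:

```python
import numpy as np

n_states, n_actions = 5, 2          # toy chain: action 0 moves left, 1 moves right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1  # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)   # "score": reward only at the far end

for _ in range(2000):                       # episodes of trial and error
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # TD update
        s = s2

print(Q.argmax(axis=1))  # learned policy: always move right, toward the reward
```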
— Simon Parkin, August 16, 2017

Age: 22 | Affiliation: Luminar
Austin Russell: Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
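The arithmetic behind that figure, assuming round detection ranges of 100 versus 200 meters (Luminar's exact numbers may differ):

```python
MPH_TO_MS = 0.44704
speed = 70 * MPH_TO_MS        # ~31.3 m/s
extra_range = 200 - 100       # "twice as far out", assuming 100 m vs. 200 m
print(extra_range / speed)    # ~3.2 s of additional warning
```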
— Jamie Condliffe, August 16, 2017

Age: 34 | Affiliation: University of Toronto
Angela Schoellig: Her algorithms are helping self-driving and self-flying vehicles get around more safely.
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-cubic-meter space for training drones to fly together in an enclosed area. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collision, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can perfectly locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
— Simon Parkin, August 16, 2017

Age: 31 | Affiliation: University of Michigan
Jenna Wiens: Her computational models identify patients who are most at risk of a deadly infection.
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived. Among the most lethal of these is Clostridium difficile. The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
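In spirit, the throw-everything-in approach maps onto off-the-shelf regularized classifiers, where the penalty term selects the informative risk factors. A sketch on synthetic data; the features and model are stand-ins, not Wiens's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))    # 200 raw EHR-derived features, all thrown in
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.5).astype(int)

# L1 regularization zeroes out irrelevant features, leaving the risk factors.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

print((model.coef_ != 0).sum())            # how many features the model kept
print(model.predict_proba(X[:5])[:, 1])    # per-patient infection risk scores
```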
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile—the infections come on fast, so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs are rising exponentially, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”

— Emily Mullin, August 16, 2017
" |
170 | 2,016 | "Gregory Wayne | MIT Technology Review" | "https://www.technologyreview.com/innovator/gregory-wayne" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Inventors Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 34 | Affiliation: DeepMind
Gregory Wayne: Using an understanding of the brain to create smarter machines.
— Caleb Garling, August 16, 2017

Age: 34 | Affiliation: Sila Nanotechnologies
Gene Berdichevsky: Exploring new materials for better lithium-ion batteries.
As employee number seven at Tesla, Gene Berdichevsky was instrumental in solving one of its earliest challenges: the thousands of lithium-ion batteries the company planned to pack into its electric sports car caught fire far more often than manufacturers claimed. His solution: a combination of heat transfer materials, cooling channels, and battery arrangements that ensured any fire would be self-contained.
— James Temple, August 16, 2017

Age: 31 | Affiliation: Google Brain Team
Ian Goodfellow: Invented a way for neural networks to get better by working together.
A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. By applying game theory, he devised a way for a machine-learning system to effectively teach itself about how the world works. This ability could help make computers smarter by sidestepping the need to feed them painstakingly labeled training data.
Goodfellow was studying how neural networks can learn without human supervision. Usually a network needs labeled examples to learn effectively. While it’s also possible to learn from unlabeled data, this had typically not worked very well. Goodfellow, now a staff research scientist with the Google Brain team, wondered if two neural networks could work in tandem. One network could learn about a data set and generate examples; the second could try to tell whether they were real or fake, allowing the first to tweak its parameters in an effort to improve.
After returning from the pub, Goodfellow coded the first example of what he named a “generative adversarial network,” or GAN. The dueling-neural-network approach has vastly improved learning from unlabeled data. GANs can already perform some dazzling tricks. By internalizing the characteristics of a collection of photos, for example, a GAN can improve the resolution of a pixelated image. It can also dream up realistic fake photos, or apply a particular artistic style to an image. “You can think of generative models as giving artificial intelligence a form of imagination,” Goodfellow says.
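The adversarial loop is compact enough to show in miniature. A toy PyTorch sketch on one-dimensional data; it follows the two-network recipe described above, not Goodfellow's original code:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(2000):
    real = torch.randn(64, 1) * 1.25 + 4.0       # the "data set": samples from N(4, 1.25)
    fake = G(torch.randn(64, 8))
    # Discriminator: learn to tell real from fake.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: tweak parameters to fool the discriminator.
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward 4.0 as G improves
```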
— Will Knight, August 16, 2017
" |
171 | 2,017 | "Gene Berdichevsky | MIT Technology Review" | "https://www.technologyreview.com/innovator/gene-berdichevsky" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Inventors Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 34 | Affiliation: Sila Nanotechnologies
Gene Berdichevsky: Exploring new materials for better lithium-ion batteries.
— James Temple, August 16, 2017
" |
172 | 2,017 | "Franziska Roesner | MIT Technology Review" | "https://www.technologyreview.com/innovator/franziska-roesner" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Inventors Creating the breakthroughs that will make everything from AI to solar power to heart valves more practical and essential.
Age: 31 | Affiliation: University of Washington
Franziska Roesner: Preparing for the security and privacy threats that augmented reality will bring.
What would hacks of augmented reality look like? Imagine a see-through AR display on your car helping you navigate—now imagine a hacker adding images of virtual dogs or pedestrians in the street.
Franzi Roesner, 31, recognized this challenge early on and is leading the thinking into what security and privacy provisions AR devices will need to protect them, and ourselves. Her research group at the University of Washington created a prototype AR platform that can, for example, block a windshield app from hiding any signs or people in the real world while a car is in motion.
“I’ve been asking the question, ‘What could a buggy or malicious application do?’” she says.
—Rachel Metz by Rachel Metz Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 34 Affiliation: Sila Nanotechnologies Gene Berdichevsky Exploring new materials for better lithium-ion batteries.
As employee number seven at Tesla, Gene Berdichevsky was instrumental in solving one of its earliest challenges: the thousands of lithium-ion batteries the company planned to pack into its electric sports car caught fire far more often than manufacturers claimed. His solution: a combination of heat transfer materials, cooling channels, and battery arrangements that ensured any fire would be self-contained.
Now Berdichevsky has cofounded Sila Nanotechnologies, which aims to make better lithium-ion batteries. The company has developed silicon-based nanoparticles that can form a high-capacity anode. Silicon has almost 10 times the theoretical capacity of the material most often used in lithium-ion batteries, but it tends to swell during charging, causing damage. Sila’s particles are robust yet porous enough to accommodate that swelling, promising longer-lasting batteries.
— James Temple by James Temple Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 32 Affiliation: University of Manchester’s Graphene Research Institute Radha Boya The world’s narrowest fluid channel could transform filtration of water and gases.
Beneath a microscope in Radha Boya’s lab, a thin sheet of carbon has an almost imperceptible channel cutting through its center, the depth of a single molecule of water. “I wanted to create the most ultimately small fluidic channels possible,” explains Boya. Her solution: identify the best building blocks to reliably and repeatedly build a structure containing unimaginably narrow capillaries. She settled on graphene, a form of carbon that is a single atom thick.
She positions two sheets of graphene (a single sheet is just 0.3 nanometers thick) next to each other with a small lateral gap between them. That is sandwiched on both sides with slabs of graphite, a material made of many layers of graphene stacked on top of each other. The result is a channel 0.3 nanometers deep and 100 nanometers wide, cutting through a block of graphite. By adding extra layers of graphene, she can tune the size of the channel in 0.3-nanometer increments.
But what fits through something so narrow? A water molecule—which itself measures around 0.3 nanometers across—can’t pass through the channel without application of pressure. But with two layers of graphene, and a 0.6-nanometer gap, water passes through at one meter per second. “The surface of graphene is slightly hydrophobic, so the water molecules stick to themselves rather than the walls,” says Boya. That helps the liquid slide through easily.
Because the gaps are so consistently sized, they could be used to build precisely tuned filtration systems. Boya has performed experiments that show her channels could filter salt ions from water, or separate large volatile organic compounds from smaller gas molecules. Because of the size consistency, her technology can filter more efficiently than others.
Boya currently works at the University of Manchester’s Graphene Research Institute in the U.K.—a monolithic black slab of a building that opened in 2015 to industrialize basic research on the material. It brands itself as the “home of graphene,” which seems appropriate given that Boya’s office is on the same corridor as those of Andre Geim and Kostya Novoselov, who won a Nobel Prize for discovering the material.
— Jamie Condliffe by Jamie Condliffe Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 31 Affiliation: Google Brain Team Ian Goodfellow Invented a way for neural networks to get better by working together.
A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. By applying game theory, he devised a way for a machine-learning system to effectively teach itself about how the world works. This ability could help make computers smarter by sidestepping the need to feed them painstakingly labeled training data.
Goodfellow was studying how neural networks can learn without human supervision. Usually a network needs labeled examples to learn effectively. While it’s also possible to learn from unlabeled data, this had typically not worked very well. Goodfellow, now a staff research scientist with the Google Brain team, wondered if two neural networks could work in tandem. One network could learn about a data set and generate examples; the second could try to tell whether they were real or fake, allowing the first to tweak its parameters in an effort to improve.
After returning from the pub, Goodfellow coded the first example of what he named a “generative adversarial network,” or GAN. The dueling-neural-network approach has vastly improved learning from unlabeled data. GANs can already perform some dazzling tricks. By internalizing the characteristics of a collection of photos, for example, a GAN can improve the resolution of a pixelated image. It can also dream up realistic fake photos, or apply a particular artistic style to an image. “You can think of generative models as giving artificial intelligence a form of imagination,” Goodfellow says.
— Will Knight by Will Knight Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 32 Affiliation: Fraunhofer Institute Svenja Hinderer A design for a heart valve that’s biodegradable—potentially eliminating the need for repeat surgeries.
Problem: Over 85,000 Americans receive artificial heart valves, but such valves don’t last forever, and replacing them involves a costly and invasive surgery. In children, they must be replaced repeatedly.
Solution: Svenja Hinderer, who leads a research group at the Fraunhofer Institute in Stuttgart, Germany, has created a biodegradable heart valve that studies strongly suggest will be replaced over time by a patient’s own cells.
To accomplish this, Hinderer created a scaffolding of biodegradable fibers that mimic the elastic properties of healthy tissues. To it she attaches proteins with the power to attract the stem cells that naturally circulate in the blood. The idea is that once implanted, her heart valve would be colonized and then replaced by a patient’s own cells within two to three years.
—Russ Juskalian

Age: 32 Affiliation: Swiss Federal Institute of Technology Lorenz Meier An open-source autopilot for drones.
Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.
So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.
—Russ Juskalian

Age: 31 Affiliation: Princeton University Olga Russakovsky Employed crowdsourcing to vastly improve computer-vision systems.
“It’s hard to navigate a human environment without seeing,” says Olga Russakovsky, an assistant professor at Princeton who is working to create artificial-intelligence systems that have a better understanding of what they’re looking at.
A few years ago, machines were capable of spotting only about 20 objects—a list that included people, airplanes, and chairs. Russakovsky devised a method, based partly on crowdsourcing the identification of objects in photos, that has led to AI systems capable of detecting 200 objects, including accordions and waffle irons.
Russakovsky ultimately expects AI to power robots or smart cameras that allow older people to remain at home, or autonomous vehicles that can confidently detect a person or a trash can in the road. “We’re not there yet,” she says, “and one of the big reasons is because the vision technology is just not there yet.”

A woman in a field dominated by men, Russakovsky started AI4ALL, a group that pushes for greater diversity among those working in artificial intelligence. While she wants greater ethnic and gender diversity, she also wants diversity of thought. “We are bringing the same kind of people over and over into the field,” she says. “And I think that’s actually going to harm us very seriously down the line.” If robots are to become integral to and integrated into our lives, she reasons, why shouldn’t people of varying professional backgrounds create them, and help them become attuned to what all types of people need?

Russakovsky herself took a rather conventional path, from studying mathematics as an undergrad at Stanford, where she also earned a PhD in computer science, to a postdoc at Carnegie Mellon. But, she suggests, “We also need many others: biologists who are maybe not great at coding but can bring that expertise. We need psychologists—the diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”

—Erika Beras

Age: 34 Affiliation: Swiss Federal Institute of Technology Michael Saliba Finding ways to make promising perovskite-based solar cells practical.
Crystalline-silicon panels—which make up about 90 percent of deployed photovoltaics—are expensive, and they’re already bumping up against efficiency limits in converting sunlight to electricity. So a few years ago, Michael Saliba, a researcher at the Swiss Federal Institute of Technology in Lausanne, set out to investigate a new type of solar cell based on a family of materials known as perovskites. The first so-called perovskite solar cells, built in 2009, promised a cheaper, easier-to-process technology. But those early perovskite-based cells converted only about 4 percent of sunlight into electricity.
Saliba improved performance by adding positively charged ions to the known perovskites. He has since pushed solar cells built of the stuff to over 21 percent efficiency and shown the way to versions with far higher potential.
—Russ Juskalian

Age: 34 Affiliation: DeepMind Gregory Wayne Using an understanding of the brain to create smarter machines.
Greg Wayne, a researcher at DeepMind, designs software that gets better the same way a person might—by learning from its own mistakes. In a 2016 Nature paper, Wayne and his coauthors demonstrated that such software can solve problems involving graphs, logic puzzles, and tree structures that the traditional neural networks used in artificial intelligence cannot.
Wayne’s computing insights play off his interest in connections between neurons in the human brain—why certain structures elicit specific sensations, emotions, or decisions. Now he often repurposes the concepts behind those brain structures as he designs machines.
—Caleb Garling
" |
173 | 2,017 | "Fabian Menges | MIT Technology Review" | "https://www.technologyreview.com/innovator/fabian-menges" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Pioneers They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Age: 32 Affiliation: IBM Research in Zurich Fabian Menges A method for measuring temperatures at the nanoscale.
Problem: Complex microprocessors — like those at the heart of autonomous driving and artificial intelligence — can overheat and shut down. And when it happens, it’s usually the fault of an internal component on the scale of nanometers. But for decades, nobody who designed chips could figure out a way to measure temperatures down to the scale of such minuscule parts.
Solution: Fabian Menges, a researcher at IBM Research in Zurich, Switzerland, has invented a scanning probe method that measures changes to thermal resistance and variations in the rate at which heat flows through a surface. From this he can determine the temperature of structures smaller than 10 nanometers. This will let chipmakers come up with designs that are better at dissipating heat.
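In spirit, the inference step reduces to Fourier's law. A hedged sketch of the arithmetic (illustrative only; the function, calibration values, and sign convention here are assumptions, not the details of Menges's probe):

```python
# If the tip-sample thermal resistance is calibrated and the heat flux is
# measured, the local sample temperature follows from dT = Q * R_th.
def sample_temperature_k(tip_temp_k: float, heat_flux_w: float,
                         thermal_resistance_k_per_w: float) -> float:
    # Positive heat_flux_w means heat flows from the sample into the tip,
    # i.e., the sample is hotter than the probe.
    return tip_temp_k + heat_flux_w * thermal_resistance_k_per_w

sample_temperature_k(300.0, 2e-6, 1e7)  # 300 K tip, 2 uW flux, 1e7 K/W -> 320 K
```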
—Russ Juskalian

Age: 33 Affiliation: Independent filmmaker Jessica Brillhart A pioneer in virtual-reality filmmaking.
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.”

—Caleb Garling

Age: 20 Affiliation: DoNotPay Joshua Browder Using chatbots to help people avoid legal fees.
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.” Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
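The core mechanic is template filling. A toy sketch (the template text and field names are invented for illustration; this is not DoNotPay's code):

```python
# Fill a boilerplate contest letter from a few user-supplied facts.
TEMPLATE = """To the parking adjudicator of {state}:

I am writing to contest ticket {ticket_id}, issued on {date}.
{grounds}

I respectfully request that the charges be dropped.
"""

def contest_letter(state: str, ticket_id: str, date: str, grounds: str) -> str:
    return TEMPLATE.format(state=state, ticket_id=ticket_id,
                           date=date, grounds=grounds)

print(contest_letter("California", "CA-12345", "2017-06-02",
                     "The posted signage at the location was obscured."))
```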
In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.
Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.
But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.
—Peter Burrows

Age: 32 Affiliation: University of Massachusetts, Amherst Phillipa Gill An empirical method for measuring Internet censorship.
Five years ago, when Phillipa Gill began a research fellowship at the University of Toronto’s Citizen Lab, she was surprised to find that there was no real accepted approach for empirically measuring censorship. So Gill, now an assistant professor of computer science at the University of Massachusetts, Amherst, built a set of new measurement tools to detect and quantify such practices. One technique automatically detects so-called block pages, which tell a user if a site has been blocked by a government or some other entity. In 2015, Gill and colleagues used her methods to confirm that a state-owned ISP in Yemen was using a traffic-filtering device to block political content during an armed conflict.
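Block pages tend to be short boilerplate notices, so one common heuristic in censorship measurement compares a suspect response against a reference copy of the page fetched from an uncensored vantage point. A simplified sketch (the 0.3 length-ratio threshold is an illustrative assumption, not the exact logic of Gill's tools):

```python
# Flag a response as a candidate block page when it is much shorter than
# the page retrieved from an uncensored vantage point.
def looks_blocked(measured_html: str, reference_html: str,
                  ratio_threshold: float = 0.3) -> bool:
    if not reference_html:
        return False  # no reference to compare against
    return len(measured_html) / len(reference_html) < ratio_threshold
```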
—Mike Orcutt

Age: 34 Affiliation: DeepMind Volodymyr Mnih The first system to play Atari games as well as a human can.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Mnih’s system was the first to combine the playful characteristics of reinforcement learning with the rigorous approach of deep learning, which mirrors the way the human brain processes information—learning by example. His software learned to play the games much as a human would, through trial and error, using the game score as the yardstick by which to hone its technique for each game.
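The underlying loop is easiest to see with a plain Q-table standing in for the deep network (the real system learned directly from raw pixels; this sketch is illustrative only):

```python
import random
from collections import defaultdict

Q = defaultdict(float)              # Q[(state, action)] -> estimated return
ACTIONS = ["left", "right", "fire"]
alpha, gamma, epsilon = 0.1, 0.99, 0.05

def choose_action(state):
    if random.random() < epsilon:                     # occasional exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # otherwise act greedily

def update(state, action, reward, next_state):
    # Nudge the estimate toward the observed reward plus the best
    # predicted return from the next state (the game score is the signal).
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```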
—Simon Parkin

Age: 22 Affiliation: Luminar Austin Russell Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
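The three-second figure checks out with back-of-the-envelope arithmetic. A quick calculation, assuming a baseline detection range of roughly 100 meters for dark objects (an assumption for illustration; the article says only that Luminar sees them twice as far out):

```python
MPH_TO_MPS = 0.44704
speed = 70 * MPH_TO_MPS           # about 31.3 m/s
baseline_range_m = 100.0          # assumed conventional-lidar range for dark objects
extra_range_m = baseline_range_m  # doubling the range adds one more baseline

print(extra_range_m / speed)      # about 3.2 s of additional warning
```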
—Jamie Condliffe

Age: 34 Affiliation: University of Toronto Angela Schoellig Her algorithms are helping self-driving and self-flying vehicles get around more safely.
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-by-10-by-10-meter enclosed space for training drones to fly together. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collisions, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
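One family of methods used for this kind of precise repeated-trajectory tracking is iterative learning control, in which each trial's tracking error corrects the next trial's input. A hedged sketch (illustrative of the general technique, not Schoellig's exact algorithms):

```python
import numpy as np

def ilc_update(u: np.ndarray, error: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Per-timestep input u and tracking error from the last trial
    yield the feedforward input for the next trial."""
    return u + gain * error

# Repeated trials drive the tracking error toward zero, which is how a
# drone can learn to follow a path tightly despite unmodeled effects
# such as wind or changing payloads.
```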
—Simon Parkin

Age: 31 Affiliation: University of Michigan Jenna Wiens Her computational models identify patients who are most at risk of a deadly infection.
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived.
Among the most lethal of these is Clostridium difficile.
The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
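In outline, the "throw everything in" approach is regularized classification over a very wide feature matrix. A sketch using scikit-learn (the stand-in data, feature count, and parameters are illustrative assumptions, not Wiens's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 5000))   # stand-in for meds, labs, procedures, and more
y = rng.integers(0, 2, 1000)   # stand-in labels: infected or not

# L2 regularization keeps the high-dimensional model from overfitting.
model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
model.fit(X, y)

risk = model.predict_proba(X)[:, 1]  # per-patient risk scores
flagged = np.argsort(risk)[-20:]     # the 20 highest-risk patients for review
```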
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile —the infections come on fast so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs keep climbing, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”

—Emily Mullin

Age: 32 Affiliation: Alibaba Cloud Hanqing Wu A cheaper solution for devastating hacking attacks.
During a distributed denial of service (DDoS) attack, an attacker overwhelms a domain-name server with traffic until it collapses. The traditional way of fending off an attack like this is to pile up bandwidth so the server under attack always has more than enough volume to handle what the attacker has released. But as hackers become capable of attacks with bigger and bigger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
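Conceptually, the defense is a mapping from one hostname to a large, rotating pool of addresses. A sketch of that idea (the names and logic are illustrative, not Alibaba Cloud's API):

```python
import random

# One hostname, many possible addresses (the documentation IP range).
ip_pool = [f"203.0.113.{i}" for i in range(1, 255)]
active = {"shop.example.com": random.choice(ip_pool)}

def resolve(host: str) -> str:
    return active[host]  # benign clients are always pointed at the live IP

def on_ddos_detected(host: str) -> None:
    burned = active[host]
    ip_pool.remove(burned)                 # retire the attacked address
    active[host] = random.choice(ip_pool)  # divert benign traffic to a fresh one
```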
—Yiting Sun
" |
174 | 2,014 | "Bill Liu | MIT Technology Review" | "https://www.technologyreview.com/innovator/bill-liu" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Entrepreneurs Meet the people who are taking innovations like CRISPR and flexible electronics and turning them into businesses.
Age: 34 Affiliation: Royole Bill Liu His flexible components could change the way people use electronics.
Bill Liu thinks he can do something Samsung, LG, and Lenovo can’t: manufacture affordable, flexible electronics that can be bent, folded, or rolled up into a tube.
Other researchers and companies have had similar ideas, but Liu moved fast to commercialize his vision. In 2012, he founded a startup called Royole, and in 2014 the company—under his leadership as CEO—unveiled the world’s thinnest flexible display. Compared with rival technologies that can be curved into a fixed shape but aren’t completely pliable, Royole’s displays are as thin as an onion skin and can be rolled tightly around a pen. They can also be fabricated using simpler manufacturing processes, at lower temperatures, which allows Royole to make them at lower cost than competing versions. The company operates its own factory in Shenzhen, China, and is finishing construction on a 1.1-million-square-foot campus nearby. Once complete, the facility will produce 50 million flexible panels a year, says Royole.
Liu dreams of creating an all-in-one computing device that would combine the benefits of a watch, smartphone, tablet, and TV. “I think our flexible displays and sensors will eventually make that possible,” he says. For now, users will have to settle for a $799 headset that they can don like goggles to watch movies and video games in 3-D.
—Elizabeth Woyke

Age: 33 Affiliation: Innovate Ventures, IBM Research Africa Abdigani Diriye A computer scientist who founded Somalia’s first incubator and startup accelerator.
“Like many Somalis, I ended up fleeing my homeland because of the civil war, back in the late 1980s. At age five I moved to the U.K. because I had family there and was able to get asylum. I grew up in a fairly nice part of London and went on to get a PhD in computer science at University College London.
“At university I started becoming more aware of the world and realized I was quite fortunate to be where I am, to have had all the opportunities that I did. So, in 2012, I helped start an organization called Innovate Ventures to train and support Somali techies. The first program we ran was a two-week coding camp in Somalia for about 15 people. Though the impact was small at the time, for those individuals it meant something, and it was my first time going back to the continent; I hadn’t visited in more than two decades.
“I started to think how Innovate Ventures could have a much bigger impact. In 2015, we teamed up with two nonprofits that were running employment training for Somali youths, found some promising startups, and put them through a series of sessions on marketing, accounting, and product design. Five startups came out of that five-month incubator, and we awarded one winner around $2,500 in seed money to help kick-start its business.
“The next year saw us partner with Oxfam, VC4Africa [an online venture-capital community focused on Africa], and Telesom [the largest telco in Somaliland], and we ran a 10-week accelerator for startups. We were hoping to get 40 to 50 applicants, but we ended up getting around 180. We chose 12 startups for a two-week bootcamp and 10 to participate in the full 10-week training and mentoring program. The top four received a total of $15,000 in funding.
“This year, the accelerator will be 12 weeks long, and we’ve received almost 400 applicants. There are some large Somali companies that are interested in investing in startups and we want to bring them on board to help catalyze the startup scene. We also hope to persuade the Somali diaspora, including some of my colleagues at IBM, to donate their skills and invest in the local technology scene.
“Countries like Kenya and Rwanda have initiatives to become technology and innovation hubs in Africa. Somaliland and Somalia face fundamental challenges in health care, education, and agriculture, but innovation, technology, and startups have the potential to fast-track the country’s development. I think we’ve started to take steps in that direction with the programs we’ve been running, and we’re slowly changing the impression people have when they view Somalia and Somaliland.”

—as told to Elizabeth Woyke

Age: 30 Affiliation: Singu Tallis Gomes An “Uber for beauty.”

Tallis Gomes had spent four years as the CEO of EasyTaxi, the “Uber of Brazil,” when he decided in 2015 to aim the same concept in a new direction—the beauty industry.
His on-demand services platform, called Singu, allows customers to summon a masseuse, manicurist, or other beauty professional to their home or office. Scheduling is done by an algorithm factoring in data from Singu and third parties, including location and weather. The professionals see fewer customers than they would in a shop, but they make more money because they don’t have to cover the overhead. Gomes says the algorithm can get a manicurist as many as 110 customers in a month, and earnings of $2,000—comparable to what a lawyer or junior engineer might make.
—Nanette Byrnes

Age: 30 Affiliation: Wafa Games Kathy Gong Developing new models for entrepreneurship in China.
Kathy Gong became a chess master at 13, and four years later she boarded a plane with a one-way ticket to New York City to attend Columbia University. She knew little English at the time but learned as she studied, and after graduation she returned to China, where she soon became a standout among a rising class of fearless young technology entrepreneurs. Gong has launched a series of companies in different industries. One is Law.ai, a machine-learning company that created both a robotic divorce lawyer called Lily and a robotic visa and immigration lawyer called Mike. Now Gong and her team have founded a new company called Wafa Games that’s aiming to test the Middle East market, which Gong says most other game companies are ignoring.
—Nanette Byrnes

Age: 32 Affiliation: Caribou Biosciences Rachel Haurwitz Overseeing the commercialization of the promising gene-editing method called CRISPR.
Rachel Haurwitz quickly went from lab rat to CEO at the center of the frenzy over CRISPR, the breakthrough gene-editing technology. In 2012 she’d been working at Jennifer Doudna’s lab at the University of California, Berkeley, when it made a breakthrough showing how to edit any DNA strand using CRISPR. Weeks later, Haurwitz traded the lab’s top-floor views of San Francisco Bay for a sub-basement office with no cell coverage and one desk. There she became CEO of Caribou Biosciences, a spinout that has licensed Berkeley’s CRISPR patents and has made deals with drug makers, research firms, and agricultural giants like DuPont. She now oversees a staff of 44 that spends its time improving the core gene-editing technology. One recent development: a tool called SITE-Seq to help spot when CRISPR makes mistakes.
—Antonio Regalado

Age: 33 Affiliation: AutoX Jianxiong Xiao His company AutoX aims to make self-driving cars more accessible.
Jianxiong Xiao aims to make self-driving cars as widely accessible as computers are today. He’s the founder and CEO of AutoX, which recently demonstrated an autonomous car built not with expensive laser sensors but with ordinary webcams and some sophisticated computer-vision algorithms. Remarkably, the vehicle can navigate even at night and in bad weather.
AutoX hasn’t revealed details of its software, but Xiao is an expert at using deep learning, an AI technique that lets machines teach themselves to perform difficult tasks such as recognizing pedestrians from different angles and in different lighting.
Growing up without much money in Chaozhou, a city in eastern China, Xiao became mesmerized by books about computers—fantastic-sounding machines that could encode knowledge, logic, and reason. Without access to the real thing, he taught himself to touch-type on a keyboard drawn on paper.
The soft-spoken entrepreneur asks people to call him “Professor X” rather than struggle to pronounce his name. He’s published dozens of papers demonstrating clever ways of teaching machines to understand and interact with the world. Last year, Xiao showed how an autonomous car could learn about salient visual features of the real world by contrasting features shown in Google Maps with images from Google Street View.
—Will Knight
" |
175 | 2,017 | "Austin Russell | MIT Technology Review" | "https://www.technologyreview.com/innovator/austin-russell" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Pioneers They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Age: 22 Affiliation: Luminar Austin Russell Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
—Jamie Condliffe
" |
176 | 2,017 | "Angela Schoellig | MIT Technology Review" | "https://www.technologyreview.com/innovator/angela-schoellig" | "Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Jon Han Pioneers They’re bringing fresh and unexpected solutions to areas ranging from cancer treatment to Internet security to self-driving cars.
Age: 34 Affiliation: University of Toronto Angela Schoellig Her algorithms are helping self-driving and self-flying vehicles get around more safely.
Safety never used to be much of a concern with machine-learning systems. Any goof made in image labeling or speech recognition might be annoying, but it wouldn’t put anybody’s life at risk. But autonomous cars, drones, and manufacturing robots have raised the stakes.
Angela Schoellig, who leads the Dynamic Systems Lab at the University of Toronto, has developed learning algorithms that allow robots to learn together and from each other in order to ensure that, for example, a flying robot never crashes into a wall while navigating an unknown place, or that a self-driving vehicle never leaves its lane when driving in a new city. Her work has demonstrably extended the capabilities of today’s robots, enabling self-flying and self-driving vehicles to fly or drive along a predefined path despite uncertainties such as wind, changing payloads, or unknown road conditions.
As a PhD student at the Swiss Federal Institute of Technology in Zurich, Schoellig worked with others to develop the Flying Machine Arena, a 10-by-10-by-10-meter enclosed space for training drones to fly together. In 2010, she created a performance in which a fleet of UAVs flew synchronously to music. The “dancing quadrocopter” project, as it became known, used algorithms that allowed the drones to adapt their movements to match the music’s tempo and character and coordinate to avoid collisions, without the need for researchers to manually control their flight paths. Her setup decoupled two essential, usually intertwined components of autonomous systems—perception and action—by placing, at the center of the space, a high-precision overhead motion-capture system that can locate multiple objects at rates exceeding 200 frames per second. This external system enabled the team to concentrate resources on the vehicle-control algorithms.
—Simon Parkin by Simon Parkin Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 33 Affiliation: Independent filmmaker Jessica Brillhart A pioneer in virtual-reality filmmaking.
Traditional filmmaking techniques often don’t work in virtual reality. So for the past few years, first as the principal filmmaker for virtual reality at Google and now as an independent filmmaker, Jessica Brillhart has been defining what will.
Brillhart recognized early on that in VR, the director’s vision is no longer paramount. A viewer won’t always focus where a filmmaker expects. Brillhart embraces these “acts of visitor rebellion” and says they push her to be “bold and audacious in ways I would never have been otherwise.” She adds: “I love how a frame is no longer the central concept in my work. I can build worlds.” —Caleb Garling by Caleb Garling Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 20 Affiliation: DoNotPay Joshua Browder Using chatbots to help people avoid legal fees.
Joshua Browder is determined to upend the $200 billion legal services market with, of all things, chatbots. He thinks chatbots can automate many of the tasks that lawyers have no business charging a high hourly rate to complete.
“It should never be a hassle to engage in a legal process, and it should never be a question of who can afford to pay,” says Browder. “It should be a question of what’s the right outcome, of getting justice.” Browder started out small in 2015, creating a simple tool called DoNotPay to help people contest parking tickets. He came up with the idea after successfully contesting many of his own tickets, and friends urged him to create an app so they could benefit from his approach.
Browder’s basic “robot lawyer” asks for a few bits of information—which state the ticket was issued in, and on what date—and uses it to generate a form letter asking that the charges be dropped. So far, 375,000 people have avoided about $9.7 million in penalties, he says.
In early July, DoNotPay expanded its portfolio to include 1,000 other relatively discrete legal tasks, such as lodging a workplace discrimination complaint or canceling an online marketing trial. A few days later, it introduced open-source tools that others—including lawyers with no coding experience—could use to create their own chatbots. Warren Agin, an adjunct law professor at Boston College, created one that people who have declared bankruptcy can use to fend off creditors. “Debtors have a lot of legal tools available to them, but they don’t know it,” he says.
Browder has more sweeping plans. He wants to automate, or at least simplify, famously painful legal processes such as applying for political asylum or getting a divorce.
But huge challenges remain. Browder is likely to run into obstacles laid down by lawyers intent on maximizing their billable hours, and by consumers wary of relying too heavily on algorithms rather than flesh-and-blood lawyers.
—Peter Burrows by Peter Burrows Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 32 Affiliation: University of Massachusetts, Amherst Phillipa Gill An empirical method for measuring Internet censorship.
Five years ago, when Phillipa Gill began a research fellowship at the University of Toronto’s Citizen Lab, she was surprised to find that there was no real accepted approach for empirically measuring censorship. So Gill, now an assistant professor of computer science at the University of Massachusetts, Amherst, built a set of new measurement tools to detect and quantify such practices. One technique automatically detects so-called block pages, which tell a user if a site has been blocked by a government or some other entity. In 2015, Gill and colleagues used her methods to confirm that a state-owned ISP in Yemen was using a traffic-filtering device to block political content during an armed conflict.
—Mike Orcutt by Mike Orcutt Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 32 Affiliation: IBM Research in Zurich Fabian Menges A method for measuring temperatures at the nanoscale.
Problem: Complex microprocessors — like those at the heart of autonomous driving and artificial intelligence — can overheat and shut down. And when it happens, it’s usually the fault of an internal component on the scale of nanometers. But for decades, nobody who designed chips could figure out a way to measure temperatures down to the scale of such minuscule parts.
Solution: Fabian Menges, a researcher at IBM Research in Zurich, Switzerland, has invented a scanning probe method that measures changes to thermal resistance and variations in the rate at which heat flows through a surface. From this he can determine the temperature of structures smaller than 10 nanometers. This will let chipmakers come up with designs that are better at dissipating heat.
—Russ Juskalian by Russ Juskalian Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 34 Affiliation: DeepMind Volodymyr Mnih The first system to play Atari games as well as a human can.
Volodymyr Mnih, a research scientist at DeepMind, has created the first system to demonstrate human-level performance in almost 50 Atari 2600 video games, including Pong and Space Invaders. Minh’s system was the first to combine the playful characteristics of reinforcement learning with the rigorous approach of deep learning, which mirrors the way the human brain processes information—learning by example. His software learned to play the games much as a human would, through playful trial and error, while using the game score as a measurement by which to hone and perfect its technique for each game.
—Simon Parkin by Simon Parkin Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 22 Affiliation: Luminar Austin Russell Better sensors for safer automated driving.
Most driverless cars use laser sensors, or lidar, to map surroundings in 3-D and spot obstacles. But some cheap new sensors may not be accurate enough for high-speed use. “They’re more suited to a Roomba,” says Austin Russell, who dropped out of Stanford and set up his own lidar company, Luminar. “My biggest fear is that people will prematurely deploy autonomous cars that are unsafe.” Luminar’s device uses longer-wavelength light than other sensors, allowing it to spot dark objects twice as far out. At 70 miles per hour, that’s three extra seconds of warning.
—Jamie Condliffe by Jamie Condliffe Share facebooklink opens in a new window twitterlink opens in a new window linkedinlink opens in a new window emaillink opens in a new window August 16, 2017 Age: 31 Affiliation: University of Michigan Jenna Wiens Her computational models identify patients who are most at risk of a deadly infection.
A sizable percentage of hospital patients end up with an infection they didn’t have when they arrived.
Among the most lethal of these is Clostridium difficile.
The bacterium, which spreads easily in hospitals and other health-care facilities, was the source of almost half a million infections among patients in the United States in a single year, according to a 2015 report by the Centers for Disease Control and Prevention. Fifteen thousand deaths were directly attributable to the bug.
Jenna Wiens, an assistant professor of computer science and engineering at the University of Michigan, thinks hospitals could learn to prevent many infections and deaths by taking advantage of the vast amounts of data they already collect about their patients.
“I think to really get all of the value we can out of the data we are collecting, it’s necessary to be taking a machine-learning and a data-mining approach,” she says.
Wiens has developed computational models that use algorithms to search through the data contained in a hospital’s electronic health records system, including patients’ medication prescriptions, their lab results, and the records of procedures that they’ve undergone. The models then tease out the specific risk factors for C. difficile at that hospital.
“A traditional approach would start with a small number of variables that we believe are risk factors and make a model based on those risk factors. Our approach essentially throws everything in that’s available,” Wiens says. It can readily be adapted to different types of data.
Aside from using this information to treat patients earlier or prevent infections altogether, Wiens says, her model could be used to help researchers carry out clinical trials for new treatments, like novel antibiotics. Such studies have been difficult to do in the past for hospital-acquired infections like C. difficile —the infections come on fast so there’s little time to enroll a patient in a trial. But by using Wiens’s model, researchers could identify patients most vulnerable to infections and study the proposed intervention based on that risk.
At a time when health-care costs are rising exponentially, it’s hard to imagine hospitals wanting to spend more money on new machine-learning approaches. But Wiens is hopeful that hospitals will see the value in hiring data scientists to do what she’s doing.
“I think there is a bigger cost to not using the data,” she says. “Patients are dying when they seek medical care and they acquire one of these infections. If we can prevent those, the savings are priceless.”

—Emily Mullin

Age: 32 | Affiliation: Alibaba Cloud
Hanqing Wu: A cheaper solution for devastating hacking attacks.
During a distributed denial of service (DDoS) attack, an attacker overwhelms a target server with traffic until it collapses. The traditional way of fending off such an attack is to stockpile bandwidth so the server under attack always has more than enough capacity to absorb what the attacker sends. But as hackers become capable of attacks with ever-larger data volumes, this is no longer feasible.
Since the target of DDoS attacks is a website’s IP address, Hanqing Wu, the chief security scientist at Alibaba Cloud, devised a defense mechanism through which one Web address can be translated into thousands of IP addresses. This “elastic security network” can quickly divert all benign traffic to a new IP address in the face of a DDoS attack. And by eliminating the need to pile up bandwidth, this system would greatly reduce the cost of keeping the Internet safe.
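Alibaba hasn't published the implementation, but the mechanism described, translating one web address into thousands of IP addresses and quietly migrating vetted clients off a saturated one, can be sketched roughly as follows; the address pool, hash scheme and sizes are assumptions for illustration:

```python
import hashlib
from ipaddress import IPv4Address

POOL_BASE = int(IPv4Address("10.0.0.0"))   # stand-in address pool
POOL_SIZE = 4096
blocked = set()                            # addresses currently under attack

def address_for(client_id: str, epoch: int) -> IPv4Address:
    """Deterministically map a vetted client to one of thousands of IPs.
    Bumping the epoch migrates everyone off a saturated address."""
    digest = hashlib.sha256(f"{client_id}:{epoch}".encode()).digest()
    offset = int.from_bytes(digest[:4], "big") % POOL_SIZE
    return IPv4Address(POOL_BASE + offset)

def resolve(client_id: str) -> IPv4Address:
    epoch = 0
    ip = address_for(client_id, epoch)
    while ip in blocked:                   # attack detected on this address:
        epoch += 1                         # divert benign traffic elsewhere
        ip = address_for(client_id, epoch)
    return ip

blocked.add(address_for("alice", 0))       # simulate one attacked address
print(resolve("alice"))                    # alice quietly moves to a new IP
```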
—Yiting Sun
" |
177 | 2,017 | "Abdigani Diriye | MIT Technology Review" | "https://www.technologyreview.com/innovator/abdigani-diriye" | "Entrepreneurs: Meet the people who are taking innovations like CRISPR and flexible electronics and turning them into businesses.
Age: 33 | Affiliation: Innovate Ventures, IBM Research Africa
Abdigani Diriye: A computer scientist who founded Somalia's first incubator and startup accelerator.
“Like many Somalis, I ended up fleeing my homeland because of the civil war, back in the late 1980s. At age five I moved to the U.K. because I had family there and was able to get asylum. I grew up in a fairly nice part of London and went on to get a PhD in computer science at University College London.
“At university I started becoming more aware of the world and realized I was quite fortunate to be where I am, to have had all the opportunities that I did. So, in 2012, I helped start an organization called Innovate Ventures to train and support Somali techies. The first program we ran was a two-week coding camp in Somalia for about 15 people. Though the impact was small at the time, for those individuals it meant something, and it was my first time going back to the continent; I hadn’t visited in more than two decades.
“I started to think how Innovate Ventures could have a much bigger impact. In 2015, we teamed up with two nonprofits that were running employment training for Somali youths, found some promising startups, and put them through a series of sessions on marketing, accounting, and product design. Five startups came out of that five-month incubator, and we awarded one winner around $2,500 in seed money to help kick-start its business.
“The next year saw us partner with Oxfam, VC4Africa [an online venture-capital community focused on Africa], and Telesom [the largest telco in Somaliland], and we ran a 10-week accelerator for startups. We were hoping to get 40 to 50 applicants, but we ended up getting around 180. We chose 12 startups for a two-week bootcamp and 10 to participate in the full 10-week training and mentoring program. The top four received a total of $15,000 in funding.
“This year, the accelerator will be 12 weeks long, and we’ve received almost 400 applicants. There are some large Somali companies that are interested in investing in startups and we want to bring them on board to help catalyze the startup scene. We also hope to persuade the Somali diaspora, including some of my colleagues at IBM, to donate their skills and invest in the local technology scene.
“Countries like Kenya and Rwanda have initiatives to become technology and innovation hubs in Africa. Somaliland and Somalia face fundamental challenges in health care, education, and agriculture, but innovation, technology, and startups have the potential to fast-track the country's development. I think we've started to take steps in that direction with the programs we've been running, and we're slowly changing the impression people have when they view Somalia and Somaliland.”

—as told to Elizabeth Woyke

Age: 30 | Affiliation: Singu
Tallis Gomes: An “Uber for beauty.”

Tallis Gomes had spent four years as the CEO of EasyTaxi, the “Uber of Brazil,” when he decided in 2015 to aim the same concept in a new direction: the beauty industry.
His on-demand services platform, called Singu, allows customers to summon a masseuse, manicurist, or other beauty professional to their home or office. Scheduling is done by an algorithm factoring in data from Singu and third parties, including location and weather. The professionals see fewer customers than they would in a shop, but they make more money because they don’t have to cover the overhead. Gomes says the algorithm can get a manicurist as many as 110 customers in a month, and earnings of $2,000—comparable to what a lawyer or junior engineer might make.
—Nanette Byrnes

Age: 30 | Affiliation: Wafa Games
Kathy Gong: Developing new models for entrepreneurship in China.
Kathy Gong became a chess master at 13, and four years later she boarded a plane with a one-way ticket to New York City to attend Columbia University. She knew little English at the time but learned as she studied, and after graduation she returned to China, where she soon became a standout among a rising class of fearless young technology entrepreneurs. Gong has launched a series of companies in different industries. One is Law.ai, a machine-learning company that created both a robotic divorce lawyer called Lily and a robotic visa and immigration lawyer called Mike. Now Gong and her team have founded a new company called Wafa Games that’s aiming to test the Middle East market, which Gong says most other game companies are ignoring.
—Nanette Byrnes

Age: 32 | Affiliation: Caribou Biosciences
Rachel Haurwitz: Overseeing the commercialization of the promising gene-editing method called CRISPR.
Rachel Haurwitz quickly went from lab rat to CEO at the center of the frenzy over CRISPR, the breakthrough gene-editing technology. In 2012 she’d been working at Jennifer Doudna’s lab at the University of California, Berkeley, when it made a breakthrough showing how to edit any DNA strand using CRISPR. Weeks later, Haurwitz traded the lab’s top-floor views of San Francisco Bay for a sub-basement office with no cell coverage and one desk. There she became CEO of Caribou Biosciences, a spinout that has licensed Berkeley’s CRISPR patents and has made deals with drug makers, research firms, and agricultural giants like DuPont. She now oversees a staff of 44 that spends its time improving the core gene-editing technology. One recent development: a tool called SITE-Seq to help spot when CRISPR makes mistakes.
—Antonio Regalado

Age: 34 | Affiliation: Royole
Bill Liu: His flexible components could change the way people use electronics.
Bill Liu thinks he can do something Samsung, LG, and Lenovo can’t: manufacture affordable, flexible electronics that can be bent, folded, or rolled up into a tube.
Other researchers and companies have had similar ideas, but Liu moved fast to commercialize his vision. In 2012, he founded a startup called Royole, and in 2014 the company, under his leadership as CEO, unveiled the world's thinnest flexible display. Compared with rival technologies that can be curved into a fixed shape but aren't completely pliable, Royole's displays are as thin as an onion skin and can be rolled tightly around a pen. They can also be fabricated using simpler manufacturing processes, at lower temperatures, which allows Royole to make them at lower cost than competing versions. The company operates its own factory in Shenzhen, China, and is finishing construction on a 1.1-million-square-foot campus nearby. Once complete, the facility will produce 50 million flexible panels a year, says Royole.
Liu dreams of creating an all-in-one computing device that would combine the benefits of a watch, smartphone, tablet, and TV. “I think our flexible displays and sensors will eventually make that possible,” he says. For now, users will have to settle for a $799 headset that they can don like goggles to watch movies and video games in 3-D.
—Elizabeth Woyke

Age: 33 | Affiliation: AutoX
Jianxiong Xiao: His company AutoX aims to make self-driving cars more accessible.
Jianxiong Xiao aims to make self-driving cars as widely accessible as computers are today. He’s the founder and CEO of AutoX, which recently demonstrated an autonomous car built not with expensive laser sensors but with ordinary webcams and some sophisticated computer-vision algorithms. Remarkably, the vehicle can navigate even at night and in bad weather.
AutoX hasn’t revealed details of its software, but Xiao is an expert at using deep learning, an AI technique that lets machines teach themselves to perform difficult tasks such as recognizing pedestrians from different angles and in different lighting.
Growing up without much money in Chaozhou, a city in eastern China, Xiao became mesmerized by books about computers—fantastic-sounding machines that could encode knowledge, logic, and reason. Without access to the real thing, he taught himself to touch-type on a keyboard drawn on paper.
The soft-spoken entrepreneur asks people to call him “Professor X” rather than struggle to pronounce his name. He’s published dozens of papers demonstrating clever ways of teaching machines to understand and interact with the world. Last year, Xiao showed how an autonomous car could learn about salient visual features of the real world by contrasting features shown in Google Maps with images from Google Street View.
—Will Knight
" |
178 | 2,023 | "Endless AI-generated spam risks clogging up Google’s search results - The Verge" | "https://www.theverge.com/2019/7/2/19063562/ai-text-generation-spam-marketing-seo-fractl-grover-google" | "Endless AI-generated spam risks clogging up Google’s search results
A ‘tsunami’ of cheap AI content could cause problems for search engines. By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
Over the past year, AI systems have made huge strides in their ability to generate convincing text , churning out everything from song lyrics to short stories. Experts have warned that these tools could be used to spread political disinformation , but there’s another target that’s equally plausible and potentially more lucrative: gaming Google.
Instead of being used to create fake news, AI could churn out infinite blogs, websites, and marketing spam. The content would be cheap to produce and stuffed full of relevant keywords. But like most AI-generated text, it would only have surface meaning, with little correspondence to the real world. It would be the information equivalent of empty calories, but still potentially difficult for a search engine to distinguish from the real thing.
Just take a look at this blog post answering the question: “What Photo Filters are Best for Instagram Marketing?” At first glance it seems legitimate, with a bland introduction followed by quotes from various marketing types. But read a little more closely and you realize it references magazines, people, and — crucially — Instagram filters that don’t exist: You might not think that a mumford brush would be a good filter for an Insta story. Not so, said Amy Freeborn, the director of communications at National Recording Technician magazine. Freeborn’s picks include Finder (a blue stripe that makes her account look like an older block of pixels), Plus and Cartwheel (which she says makes your picture look like a topographical map of a town.
The rest of the site is full of similar posts, covering topics like “ How to Write Clickbait Headlines ” and “ Why is Content Strategy Important? ” But every post is AI-generated, right down to the authors’ profile pictures. It’s all the creation of content marketing agency Fractl, who says it’s a demonstration of the “massive implications” AI text generation has for the business of search engine optimization, or SEO.
“Because [AI systems] enable content creation at essentially unlimited scale, and content that humans and search engines alike will have difficulty discerning [...] we feel it is an incredibly important topic with far too little discussion currently,” Fractl partner Kristin Tynski tells The Verge.
To write the blog posts, Fractl used an open source tool named Grover, made by the Allen Institute for Artificial Intelligence. Tynski says the company is not using AI to generate posts for clients, but that this doesn't mean others won't. “I think we will see what we have always seen,” she says. “Blackhats will use subversive tactics to gain a competitive advantage.” The history of SEO certainly supports this prediction. It has always been a cat-and-mouse game, with unscrupulous players trying whatever methods they can to attract as many eyeballs as possible while gatekeepers like Google sort the wheat from the chaff.
As Tynski explains in a blog post of her own, past examples of this dynamic include the “article spinning” trend, which started 10 to 15 years ago. Article spinners use automated tools to rewrite existing content, finding and replacing words so that the reconstituted material looks original. Google and other search engines responded with new filters and metrics to weed out these mad-lib blogs, but it was hardly an overnight fix.
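For contrast, the article spinning of that era was little more than automated synonym substitution. A sketch of the kind of tool search engines learned to filter; the toy thesaurus here is invented:

```python
import random

SYNONYMS = {  # toy thesaurus; real spinners shipped far larger ones
    "great": ["excellent", "terrific", "superb"],
    "content": ["material", "copy", "text"],
    "improve": ["boost", "enhance", "elevate"],
}

def spin(sentence: str) -> str:
    """Rewrite a sentence by swapping words for random synonyms."""
    words = []
    for word in sentence.split():
        options = SYNONYMS.get(word.lower())
        words.append(random.choice(options) if options else word)
    return " ".join(words)

print(spin("great content can improve your rankings"))
```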
AI text generation will make article spinning “look like child’s play,” writes Tynski, allowing for “a massive tsunami of computer-generated content across every niche imaginable.” Mike Blumenthal, an SEO consultant and expert, says these tools will certainly attract spammers, especially considering their ability to generate text on a massive scale. “The problem that AI-written content presents, at least for web search, is that it can potentially drive the cost of this content production way down,” Blumenthal tells The Verge.
And if the spammers’ aim is simply to generate traffic, then fake news articles could be perfect for this, too. Although we often worry about the political motivations of fake news merchants, in most interviews the people who create and share this content say they do it for the ad revenue.
That doesn’t stop it being politically damaging.
Right now, spotting fake AI text is pretty easy

The key question, then, is: can we reliably detect AI-generated text? Rowan Zellers of the Allen Institute for AI says the answer is a firm “yes,” at least for now. Zellers and his colleagues were responsible for creating Grover, the tool Fractl used for its fake blog posts, and also engineered a system that can spot Grover-generated text with 92 percent accuracy.
“We’re a pretty long way away from AI being able to generate whole news articles that are undetectable,” Zellers tells The Verge.
“So right now, in my mind, is the perfect opportunity for researchers to study this problem, because it’s not totally dangerous.” Spotting fake AI text isn’t too hard, says Zellers, because it has a number of linguistic and grammatical tells. He gives the example of AI’s tendency to reuse certain phrases and nouns. “They repeat things ... because it’s safer to do that rather than inventing a new entity,” says Zellers. It’s like a child learning to speak, trotting out the same words and phrases over and over, without considering the diminishing returns.
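One crude proxy for the repetition tell Zellers describes is the share of distinct n-grams in a text; suspiciously repetitive output scores low. This is an illustration of the idea only, not Grover's actual detector, which is itself a neural network:

```python
def distinct_ngram_ratio(text: str, n: int = 3) -> float:
    """Share of n-grams that are unique; repetitive text scores low."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

human = "the cat sat on the mat while the dog slept by the door"
loopy = "the cat sat on the mat the cat sat on the mat the cat"
print(distinct_ngram_ratio(human))  # close to 1.0: mostly unique phrases
print(distinct_ngram_ratio(loopy))  # much lower: the same phrases loop
```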
However, as we’ve seen with visual deepfakes, just because we can build technology that spots this content, that doesn’t mean it’s not a danger.
Integrating detectors into the infrastructure of the internet is a huge task, and the scale of the online world means that even detectors with high accuracy levels will make a sizable number of mistakes.
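That scale problem is worth making concrete. Treating the 92 percent figure as the detector's rate on both real and fake pages, and assuming illustrative volumes (the daily page count and the 1 percent AI share below are assumptions, not reported numbers), the mistakes pile up fast:

```python
pages_scanned = 1_000_000_000      # assumed daily volume, for illustration
ai_share = 0.01                    # assume 1% of pages are AI-generated
accuracy = 0.92                    # Grover-style detector, per Zellers

# Simplification: treat 92% as both the true-positive and true-negative rate.
false_positives = pages_scanned * (1 - ai_share) * (1 - accuracy)
missed_fakes = pages_scanned * ai_share * (1 - accuracy)
print(f"{false_positives:,.0f} human pages wrongly flagged per day")
print(f"{missed_fakes:,.0f} AI pages slipping through per day")
```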
Google did not respond to queries on this topic, including the question of whether or not it’s working on systems that can spot AI-generated text. (It’s a good bet that it is, though, considering Google engineers are at the cutting edge of this field.) Instead, the company sent a boilerplate reply saying that it’s been fighting spam for decades, and always keeps up with the latest tactics.
We’re already turning away from search engines

SEO expert Blumenthal agrees, and says Google has long proved it can react to “a changing technical landscape.” But he also says a shift in how we find information online might make AI spam less of a problem.
More and more web searches are made via proxies like Siri and Alexa, says Blumenthal, meaning gatekeepers like Google only have to generate “one (or two or three) great answers” rather than dozens of relevant links. Of course, this emphasis on the “one true answer” has its own problems, but it certainly minimizes the risk from high-volume spam.
The endgame of all this could be even more interesting, though. AI text generation is advancing in quality extremely quickly, and experts in the field think it could lead to some incredible breakthroughs. After all, if we can create a program that can read and generate text with human-level accuracy, it could gorge itself on the internet and become the ultimate AI assistant.
“It may be the case that in the next few years this tech gets so amazingly good, that AI-generated content actually provides near-human or even human-level value,” says Tynski. In which case, she says, referencing an Xkcd comic, it would be “problem solved.” Because if you’ve created an AI that can generate factually correct text that’s indistinguishable from content written by humans, why bother with the humans at all?
" |
179 | 2,020 | "The Ghost in the Machine – Emotionally Intelligent Conversational Agents and the Failure to Regulate ‘Deception by Design’ – SCRIPTed" | "https://script-ed.org/article/the-ghost-in-the-machine-emotionally-intelligent-conversational-agents-and-the-failure-to-regulate-deception-by-design" | "SCRIPTed: A Journal of Law, Technology & Society, Volume 17, Issue 2, August 2020

The Ghost in the Machine – Emotionally Intelligent Conversational Agents and the Failure to Regulate ‘Deception by Design’

Pauline Kuss* and Ronald Leenes**

© 2020 Pauline Kuss and Ronald Leenes. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

Google’s Duplex illustrates the great strides made in AI to provide synthetic agents with the capability for intuitive and seemingly natural human-machine interaction, fostering a growing acceptance of AI systems as social actors. Following BJ Fogg’s captology framework, we analyse the persuasive and potentially manipulative power of emotionally intelligent conversational agents (EICAs). By definition, human-sounding conversational agents are ‘designed to deceive’. They do so on the basis of vast amounts of information about the individual they are interacting with. We argue that although the current data protection and privacy framework in the EU offers some protection against manipulative conversational agents, the real upcoming issues are not yet acknowledged in regulation.
Keywords: Google Duplex; conversational agent; persuasion; manipulation; regulatory failure

Cite as: Pauline Kuss and Ronald Leenes, "The Ghost in the Machine – Emotionally Intelligent Conversational Agents and the Failure to Regulate ‘Deception by Design’" (2020) 17:2 SCRIPTed 320, https://script-ed.org/?p=3886. DOI: 10.2966/scrip.170220.320

* LL.M./Analyst, hy GmbH, Berlin, Germany, paulinekuss@gmx.net.
This paper is based on Pauline Kuss, Deception by Design for the Goal of Social Gracefulness: Ethical and Legal Concerns of Humanlike Conversational Agents, Tilburg, 2019.

** Professor in Regulation by Technology, Tilburg Institute for Law, Technology, and Society, Tilburg, the Netherlands, r.e.leenes@tilburguniversity.edu

1 Introduction

In May 2018, a crowd of software engineers cheered at Google’s I/O conference after the demonstration of Duplex, an intelligent voice agent that fits in your pocket-sized smartphone and is able to make calls on behalf of its user in a deceivingly human-sounding voice. Intended to take care of cumbersome tasks such as the booking of appointments at hairdressers or restaurants, the previewed feature of Google Assistant convincingly mimics human behaviour by integrating speech disfluencies like ‘hmmm’ and ‘ums’ into its conversation – leading to a result applauded as a significant design achievement by some [1] and criticized as “Uncanny AI Tech” [2] by many others.
The development of conversational agents like Google’s Duplex embeds artificial intelligence into systems that are designed to deceive humans about their synthetic nature. However, we seem to have moved beyond the Uncanny Valley and no longer feel uneasy about such close-to-human vocal behaviour. Very soon we could find ourselves in a world where discerning whether we are talking to a human or an intelligent system on the other end of the communication channel becomes challenging. In particular, the potential combination with other, currently unrelated, developments in voice AI which allow for the realistic imitation of a person’s voice based on only a snippet of a recording [3] suggests worrying scenarios of deliberate deception and fraud, including cases of voice phishing or politically motivated manipulation.
In an era in which the term ‘fake-news’ has become a household word, the hazardous potential of digital technology as a facilitator of distributing deceptive messages is nothing new. However, the possibility of deceivingly accurate voice imitation, its potential integration into regular communication channels and the possibly unavoidable power of emotional associations attached to the sound of a familiar voice, suggest yet another, efficiently scalable and – possibly most worryingly – highly personalisable tool for actors with malicious intentions. But even in cases of conversational AI which discloses its synthetic identity upfront, an ethical consideration of the manipulative potential embedded in interactive, trust-generating and seemingly human intelligent systems appears appropriate.
From a legal perspective, the concept of ‘manipulation’ is difficult to grasp – where do we draw the line between manipulative and merely persuasive interventions? [4] Manipulation involves the intentional misuse of another’s weaknesses – a skill which emotionally intelligent conversational agents (EICAs) can be expected to master with near perfection given their ability to access and process a vast amount of data and to adapt their behaviour accordingly.
The development of deceivingly human voice AI reflects a general trend towards increasingly seamless human-machine interaction. This is highly desirable from the perspective of technology developers because it supports convenience, thereby increasing users’ enjoyment of and willingness to engage with respective systems. The concealment of machine-operated interaction, however, necessarily leads to a growing disguise of the presence of intelligent systems in people’s surroundings. Additionally, cloud and fog computing accelerate a decoupling of devices’ outer appearance from their ability to record, store and process data as their real processing power is no longer contained in their enclosures.
[5] Duplex marks the beginning of a development that promises to seamlessly embed a growing number of intelligent systems in our physical surroundings and in our emotional and social spaces. What does it mean when the sphere of human interaction becomes increasingly interwoven with the input of intelligent systems – systems that appear much better equipped to convincingly represent interests than ‘normal’ human beings? Given conversational agents’ continuous processing of what their conversation partner is saying in order to facilitate adequate responses, how long will it take until such systems integrate in-depth analysis of how things are said into their response-engineering algorithm? Identifying personality traits of the interacting data subjects based on their choice of words, [6] or detecting a predisposition for psychosis [7] and Parkinson disease [8] based on non-verbal cues – where do we draw the line for what information intelligent conversational agents may derive from their counterpart’s voice? While the abilities of Google’s Duplex remain quite restricted at this point, the complexity of legal and ethical concerns related to humanlike conversational AI is evident already. We can expect the Duplex feature of Google Assistant to spread to Europe. The question then arises in how far such concerns are addressed by the current European legal frameworks for data protection and consumer protection. Given the power of lock-in effects, which might lock-in unfortunate, (privacy-harming) original design choices into subsequent versions or follow-up products of a new technology, [9] the moment to consider what conversational AI shall look like, is now.
Besides describing the causes of and concerns related to the manipulative potential of humanlike conversational agents, this paper assesses some of the legal concerns in view of current European legislation. A focus on data protection and privacy law is chosen, motivated by the observation that, first, the risk of manipulative systems naturally implies the possibility of an infringement of individuals’ decisional and intellectual privacy. Secondly, the extent of data processing involved directly affects the manipulative potential of a conversational agent: regulations on the permitted type of data processed, the employed processing techniques as well as on the required level of transparency and data subject control can thus be suggested as implicitly addressing the concern of manipulative systems. The legal analysis therefore considers the General Data Protection Regulation (EU) 2016/679 (GDPR) and the Privacy and Electronic Communications Directive 2002/58/EC (ePrivacy Directive) as well as the proposed ePrivacy Regulation replacing the latter.
While protection from manipulative systems might also be found in other legal fields such as contract and consumer protection regulations this requires information about specific operational settings – which is absent given the prospective nature of the developments sketched in this paper –, as well as a focus on one or more specific jurisdictions. Instead, the assessment of data protection and privacy law allows for a focus on those specific attributes of intelligent agents that form the basis of the particular manipulative potential of such systems: their ability to ‘know’ a lot about the interacting individual – be it through real-time data processing or accessibility to other sources of data and customer profiles – and their capacity to adjust their behaviour accordingly in a statistically optimised fashion.
This paper is organised as follows. First, in section two, we present the context of our analysis, conversational agents. Next, section three explores the persuasive and manipulative aspects of these agents. Section four provides an analysis of the use of (deceptive) conversational agents from the perspective of the General Data Protection Regulation (GDPR) and the e-Privacy framework. Section five concludes the paper with a call to action.
2 Conversational Agents

The development of human-like machines capable of naturally conversing with people has been a long-standing goal for researchers in the field of human-computer interaction.
[10] Increasingly, conversational agents, described as “dialogue systems often endowed with ‘humanlike’ behaviour”, [11] emerge as common human-computer interfaces causing a “rise of conversation as platform” [12] as illustrated by intelligent voice assistants like Apple’s Siri and Microsoft’s Cortana.
Technology developers are keen on designing intelligent conversational agents that leave the user with an impression of merely a human interaction , optimizing the agents’ responses according to a counterpart’s emotional and mental state or personality for the sake of user acceptance.
Considering Google’s Duplex as an illustrative example, current developments in the field of humanlike conversational AI are driven by a combination of recurrent neural networks, automatic speech recognition technology and sophisticated text to speech engines which not only include speech disfluencies but also match the speed of their responses to the latency expectations of their conversation partner.
[13] The dynamic adaptation of speech latency to match the counterparty’s expectations – thereby designing a conversation that is perceived as natural not only on the level of voice-quality but also with respect to the responsive behaviour of the intelligent agent – illustrates the sophistication of possibilities available to technology developers intending to design convincingly human-sounding AI agents. While dynamic adaptation of speech latency is merely one example of the greater research field of emotional speech synthesis, [14] it shows how the behaviour of intelligent systems can be dynamically adjusted, optimized to personally match individual conversation partners. What remains is the question regarding the pursuit of which interests and goals the responses of such system are optimized.
Without intending to pose allegations, it should be considered that there is a fine line between convincing or persuading people (e.g. into believing they are talking to a human being) and nudging or manipulating people. Although technology-induced power imbalances are far from novel, the level of sophistication with which they might be implemented in the context of conversational agents deserves particular attention.
3 Conversational Agents and Manipulation Before providing an analysis of the specific characteristic of a deceivingly human voice and behaviour which endow EICAs with particularly powerful and thus potentially particularly concerning manipulative capacities, the persuasive and possibly manipulative nature of conversational agents must be explored.
3.1 Conversational agents as persuasive technology According to Fogg, computers can ‘persuade’ – that is change people’s behaviour or attitude – by appearing either as a tool , a medium or as a social actor.
[15] He claims that computers’ capacity to change people’s behaviour and attitudes in their function as a social actor essentially depends on individuals’ tendency to form relationships with technology. Supported by this human tendency, computers can exhibit persuasive effects through three distinct persuasive affordances when appearing in the role of a social actor: Establishment of social norms Invocation of social protocols Provision of social support and sanctioning For the context of conversational agents, in particular the second and third affordances appear of importance: conversational agents can leverage social protocols to influence user behaviour such as the invocation of politeness norms, turn taking or reciprocity through the intentional expression of respective social cues. Likewise, the conscious provision of social support or sanctioning in the form of praise or criticism – a frequently observed dynamic in human-human interactions – can be easily used by conversational agents to affect individuals’ conduct.
[16] Both of these persuasive affordances build on the human tendency to behave socially vis-à-vis computers, echoing the ‘Computers are Social Actors’ (CASA) paradigm developed by Reeves and Nass.
[17] They suggest that anthropomorphism is driven by mindless user behaviour, which can be intentionally triggered through the provision of respective contextual cues – most notably through the expression of human features and characteristics.
[18] It can therefore be assumed that intelligent systems appearing in the role of a social actor are more persuasive the more accurately they mimic human behaviour.
[19] Extending Fogg’s model we propose two additional persuasive affordances that computers can use to persuade: leveraging of situational or personal features and leveraging associations of existing relationships.
These capture, first, the power of data resources and processing capacities for fine-tuned personalisation and (real-time) adaptation of a system’s behaviour and, second, the particular ability to communicate through a deceivingly accurate human voice. We suggest that these two additional categories will be of increasing visibility and relevance in light of human-sounding conversational agents.
3.2 Conversational agents as intentional actors

In order to define something as persuasive it is not enough that it simply influences human behaviour: although the summer sun is a reason for people to put on sunscreen, we would be reluctant to talk about the sun as a persuasive actor. Fogg notes that since machines do not have intentions, a computer qualifies as a persuasive technology only when those who create, distribute, or adopt the technology do so with an intent to affect human attitudes or behaviours.
[20] , [21] The question to what extent EICAs have to be considered a persuasive technology therefore necessitates the identification of intentions involved – taking into account both the intentions embedded into the system by its creators as well as the interests of the user operating the system for a particular purpose.
Google promotes Duplex and its deceivingly human voice as offering a convenient tool that relieves customers from cumbersome phoning tasks while allowing for natural and intuitive human-machine interaction.
[22] Besides user satisfaction and a general strive for AI success stories, additional motives can be assumed. For instance Duplex might serve the company’s interest in attention-capturing technological novelty or the stimulation of user engagement. And, of course, Duplex will also generate and collect valuable conversation and customer data that can be leveraged for further improvements, subsequent products or premium price tags for advertisement deals. Users employing the calling assistant are likely to be motivated by the expectation of time-savings, convenience or the general enjoyment of playing with the newest feature of their phone.
[23] Also malicious and illegal user intentions are conceivable, including scenarios of intentional deception and voice phishing, a form of auditory identity fraud, with the ultimate goal of economic exploitation or political manipulation.
The concept of manipulation can be described as neighbouring the concept of persuasion on a Spectrum of Influence.
[24] Manipulation is slightly more controlling than persuasion albeit not as incontrovertibly controlling as coercion, which makes a precise definition of manipulation more complex and elusive. Anne Barnhill offers a definition of manipulation that is useful for our purposes: Manipulation is intentionally directly influencing someone’s beliefs, desires, or emotions such that she falls short of (the manipulator’s) ideals for belief, desire, or emotion in ways typically not in her self- interest or ways that are likely not to be her self-interest in the present context [25] This suggests a consequentialist perspective as it takes the outcome contrary to the self-interest of the manipulated individual as one defining element. Complementing this first theoretical notion of manipulation, she offers a second, more intuitive definition following the thoughts of Joel Rudinow [26] that further emphasizes this situational relevance through a focus on situational weaknesses: Manipulation is intentionally making someone succumb to weakness or a contextual weakness, or altering the situation to create a contextual weakness and then making her succumb to it.
[27] Given this definition of manipulation, we can now illustrate how intelligent conversational agents can be used to persuade or manipulate individuals through the affordances described by Fogg and extended by us (Table 1).
Table 1: Examples of persuasion and manipulation by conversational agents leveraging the persuasive affordances of technologies appearing in the social actor functionality.

Affordance: Establishment of social norms
Persuasion example. Intent: increase social acceptance of interacting with EICAs. Intervention: win users’ acceptance with rational arguments for the desirability of interacting with conversational agents (e.g. convenience) and the possibility to opt out of interactions. In short: priming of the target’s (perceived) interest while the ultimate choice remains with the target.
Manipulation example. Intervention: simply establish the AI agent as a given without revealing its identity; make an opt-out impossible or difficult; make alternatives to the interaction tedious, time-consuming or costly. In short: give targets no choice, or artificially/unnecessarily increase the cost of the alternative to the intended choice.

Affordance (proposed extension for the context of conversational AI): Leveraging associations of existing relationships
Persuasion example. Intent: trigger trust within a target by capitalizing on emotional associations of existing personal relationships. Intervention: reveal the synthetic nature of the conversational agent through an introduction as the personal assistant of a close friend in order to achieve a target’s willingness to share their agenda for the purpose of finding a suitable date for a joint night out. In short: no pretence of own personality, but identification as an intelligent assistant and explicit reference to the social relationship involved in respective associations.
Manipulation example. Intervention: employment of a voice imitation algorithm to simulate the voice of a person (closely) known to the target in order to leverage the respective person’s reputation, friendship or authority for malicious interests such as economic fraud or political manipulation. In short: pretence of own personhood by the artificial agent; employing identity fraud through voice phishing to leverage the trust of existing personal relationships and social contexts for malicious purposes.
[28] The identification of interests and thus intentionality embedded within EICAs supports their denomination as potentially manipulative technology. Evidently, an assessment of Duplex’s intentionality constitutes a challenging task, depending in its outcome on the particularities of future technical developments as well as on potential economic interdependencies between this and other Google products. Visible plurality of the interests involved suggests that the target population of the respective intentions might be equally multi-layered, including not only the direct user of the system but also the individual who will eventually interact with the EICA on the other end of the (phone) line, as well as potential misusers of the technology. For the sake of clarity, we will refer to respective individual as the passive recipient of a communication, describing the person interacting with the conversational agent without being the one actively initiating the human-machine interaction.
[29] Of note is that the recipient is the only actor unable to influence the intentionality attributable to the intelligent agent, as she is not involved in defining its endogenous or autogenous [30] intent. At the same time, the recipient is the target of both users’ and misusers’ interests and thus the subject of potentially related persuasive or manipulative intentions. Furthermore, compared to developers, users and misusers, the recipient is likely to be least knowledgeable about the system’s technical nature, presence and capacities, suggesting an imbalance of power and calling into question the autonomy and rationality of the recipient’s choice when agreeing to respective interaction – granted she is asked in the first place. The recipient therefore has to be regarded as the actor most in need of protection against the system’s manipulative potential.
3.3 The concerning power of persuasive conversational agents If we accept that conversational agents are to be regarded as persuasive technology, we can explore their powers and the concerns they raise if adopted in conversations between a machine (initiator) and a natural person (recipient), for instance through a robocall. This section argues that EICAs are particularly powerful tools of manipulation due to their particular ability to trigger anthropomorphic user behaviour and their capacity for conversational engineering resulting in a personalisation according to mind, emotion and context.
3.3.1 Anthropomorphism and user expectation Following the aforementioned idea that certain social cues can trigger mindless behaviour on the side of the human actor in human-machine interactions, the ability of EICAs to create an intuitive and deceptively accurate impression of everyday human-to-human interaction can be expected to support anthropomorphism and to trigger the expression of inappropriate social behaviour by concerned individuals towards the machine.
Elaborating on the concerns of intelligent systems imitating human behaviour in the commercial context, Kerr suggests that anthropomorphism is concerning from a consumer protection perspective, as people erroneously assume intelligent online assistants to be neutral or even customer-serving in their interests, overlooking the assistant’s likely economic partiality.
[31] [32] Kerr’s argumentation points towards the important link between designing deceptively accurate human-like AI, anthropomorphism, user expectation and consequential, potentially worrisome user behaviour. When picking up the phone, hearing a human voice on the other end of the line, people expect a social encounter between two human beings. Without a reason to challenge this assumption, they will implicitly expect their human-sounding conversation partner to also exhibit other human characteristics. They will thus not expect their counterpart to have access to a vast amount of data and processing power, enabling the same to sophisticatedly analyse the subtleties of conducted interaction and optimise its responses through statistical computations and profiling techniques.
[33] Not expecting the actual (processing) capacities of the other party, individuals are unable to reasonably judge the potential consequences of their behaviour in given circumstances – reflecting what Luger describes as a missing “grammar of interaction”.
[34] Individuals will thus not be given any reason to adequately adapt their own behaviour.
[35] While the inappropriate anthropomorphism of intelligent systems might appear only mildly worrisome to some, the potentially accompanying erosion of people’s agency to make informed, sovereign choices raises serious concerns regarding individuals’ autonomy, dignity and privacy. Respective concerns are particularly obvious in cases where an AI does not identify itself as an artificial agent, thereby intentionally deluding the expectation of the interacting person.
[36] Capitalizing on this human tendency to treat human what appears human , the design of interactive systems imitating human behaviour with deceiving accuracy appears to imply a concealment of the system’s mathematical capacities, underlying data resources and potentially involved stakeholder interest – be it intentionally or as an unintended side-effect.
It may be noted that this human tendency to interact socially with machines exhibiting human characteristics holds even in cases where the individual is well aware of the synthetic nature of their counterpart, as suggested by Weizenbaum’s findings with ELIZA.
[37] Moreover, respective discussion is nothing new: already in 1944 the Heider-Simmel illusion showcased a human willingness to attribute motives and character traits to inanimate objects as un-human as moving geometrical figures.
[38] Also the ethical issue of deception through autonomous agents has already been discussed by existing scholarship such as Schafer’s analysis of the use of autonomous agents for online police operations.
[39] However, what is new with EICAs addressed by this article is – besides their formerly unknown sophistication – the broad market reach of respective technology and the ubiquity of their employment enabled through cloud infrastructure. These developments merit the here presented discussion as they imply the decentralisation and uncontrolled scalability of arising concerns discussed in the following.
3.3.2 Power imbalances and conversational engineering The (intentional) concealment of the actual capacities of an intelligent agent, leading to a respective ignorance on the side of interacting individuals, threatens to introduce considerable power imbalances into the sphere of social interactions. Arguably, in most social encounters power imbalances always exist to some extent due to information asymmetries, resources inequality or motivational intransparency. However, respective concerns are multiplied exponentially with the introduction of socially engaging intelligent systems that vastly exceed their human counterparts in their capacity for data-driven communication design.
While human communicators are bound to learn from their own experience (or individual study), an artificial agent can hardly be seen as a single actor, but rather constitutes one instance of a bigger system that cumulatively gathers learning-relevant experiences, enabling each instance to feed on an abundance of data and models stored on its servers. Fogg describes several advantages of computers over humans with respect to their persuasive capacity, including computers’ persistence ; ability to store, access and manipulate great volumes of data ; scalability and ubiquity.
[40] Its access to a great amount of data, which can be leveraged as argument within as well as for the strategic optimisation of a persuasive agenda, grants conversational AI a significant advantage over humans in shaping an interaction and its outcome. Systems’ potential capacity of real-time profiling to support optimized adaptation of an agent’s behaviour or its fundamental characteristics raises a type of concern that might be referred to as conversational engineering.
The imbalance of power implied by (intransparent) conversational engineering appears morally worrisome as it favours intelligent conversational agents in their ability to steer an interaction for persuasive or even manipulative intentions while undermining persons’ capacity to accurately judge the dynamics of the social encounter they find themselves in. While similar imbalances and its manipulative consequences might already be visible in existing applications of data-based decision making or profiling techniques, [41] we propose that they are particular prominent in the context of deceptively human, interactive EICAs due to their outstanding social character and how embedded they can become into every-day social encounters.
3.3.3 Personalisation according to mind, emotion and context The idea of conversational engineering illustrates the ability of EICAs to personalize their behaviour with respect to their conversation partner, furthering the system’s persuasive power. At the point of writing, no details on the exact scope of the data processing activities involved in the Duplex system have been released by Google.
[42] The idea of an intelligent system which elaborately analyses your choice of words for potentially manipulative intentions or interprets your timbre and tone of voice for profiling purposes which go beyond the goal of presenting you with a pleasant interaction, might thus remain merely a hypothetical thought for now. However, a search for context relevant patents held by Google suggests that within Google work is done to develop intelligent systems capable of adapting their behaviour according to the personality and current emotional state of an interacting individual, as well as their contextual and environmental surrounding.
[43] Google is surely not the only one developing intelligent systems capable of adapting their behaviour to the mental and emotional state of the interacting individual. Amazon recently patented an updated version of its virtual assistant Alexa that would analyse users’ speech and other signals of emotion or illness, enabling the suggestions of activities suitable for a user’s emotional state and the proactive offer to purchase medicine.
[44] [45] Amazon’s recent purchase of PillPack, a US-wide operating online seller of prescription drugs, [46] offers one explanation for the patent’s focus on the medical market, illustrating the relevance of transparently assessing the web of interests that possibly affect the behaviour of intelligent assistants, as such systems are likely to be less objective than the general user might expect.
3.4 Dual Use and the Weaponization of Conversational Agents As for most technologies, intelligent systems have the potential for dual use and thus carry the risk of weaponization.
[47] While the use of intelligent calling agents in the context of armed conflict might appear as an unrealistic scenario at first sight, the threat of serious misuse of such systems in contexts such as political campaigning or electoral fraud is actually highly concerning. Considering the already discussed persuasive potential of human-like voice AI, emerging systems combining conversational abilities with (already existing) voice imitation algorithms [48] intensify such worries. How unlikely are scenarios of employing such system for mass callings – possibly using the voice of popular political figures – intended to influence political dynamics in a particular country? The potential risk of technology as a tool for (political) manipulation is surely not new arising only with the advent of intelligent conversational agents.
[49] And yet, intelligent conversational agents display two characteristics that suggest them as a particularly potent instrument for potential manipulation: first, as the calls are conducted automatically without the need for human intervention, communicating (manipulative) messages through conversational agents is highly scalable. Not even the precise wording of the intended conversation would have to be humanly designed. Secondly, while scalability might also be seen for the spread of digital video footage or nudge-intending social media content, the channel of a phone call gives conversational agents a much more personal character. A phone call is explicitly directed at one single person and constitutes a social interaction quite familiar to most people. Consequentially, the message conveyed can be highly individualized to optimize the impact of the intended nudge. Additionally, recipients might be less sceptical towards messages received through personal interaction, as the possibility of dangerously authentic fake-calls is less prominent within the public awareness compared to by now better-known examples of visual deep-fakes.
4 Existing Legal Framework Now that we have an understanding of the potential of emotionally intelligent conversational agents that produce increasingly natural conversation bringing to bear knowledge about persuasion and manipulation, connected to information about the state of mind of the recipient and their emotions, as well as information from the vast trove of the recipient’s onliƒe, we can explore what this entails from the perspective of the law, in particular, data protection and privacy regulation (in the EU). In this context, the data protection (GDPR) and e-Privacy frameworks are most prominent.
4.1 GDPR The General Data Protection Regulation 2016/679 (GDPR) regulates the processing of personal data which is defined as “[1] any information [2] relating to an [3] identified or identifiable [4] natural person”.
[50] Personal data is a very broad notion.
[51] The Art. 29 Data Protection Working Party notes that the term includes any information regardless of its nature, content or format.
[52] Acoustic information, including voice recordings are explicitly listed as personal data [53] and additionally referred to as an example of biometric data , which come with the particularity of providing both content about an individual as well as a link between the same and some piece of information.
[54] Voice recordings are thus to be regarded as identifiers of natural persons, implying fulfilment of the definitional elements [3] and [4] above. With respect to information relating to a natural person derived from voice recordings, the element of ‘identified or identifiable’ is satisfied when respective data can be linked to a natural person through any “means reasonably likely to be used […] by the controller or by another person”.
[55] The status of information as personal data is thus dynamic, depending on context and advances in re-identification technologies, [56] which suggests considering information derived from individuals’ voices as personal data until effective irreversible anonymisation can be assured. The use of such information for the personalization of an agent’s behaviour suggests that a link between the data and an individual can be assumed.
[57] Technical information such as smartphone identifiers, IP addresses or phone numbers is also linked to the person addressed by the conversational agent, contributing to making this individual an identifiable person, [58] following the “standard of the reasonable likelihood of identification”.
[59] ‘Relating to’ a natural person [element 2], again, has a broad scope. Such a relation can exist in content, purpose or result.
[60] Relating through ‘content’ is rather straightforward. It refers to information about a person, which in the current context would include personal phone numbers, but also personality traits or mental and emotional states of an individual, should such information be derived through voice analysis. If the information collected through the conversation is used or likely to be used to “evaluate, treat in a certain way or influence the status or behaviour of an individual”, [61] it relates to this person by ‘purpose’. Data relates to a person by ‘result’ if “their use is likely to have an impact on a certain person’s rights and interests”.
[62] Such a result is present irrespective of the gravity of the impact – the different treatment of one person from another suffices.
[63] EICAs adjust their behaviour according to individual interactions and perceived environments – what Hildebrandt refers to as “data-driven agency”.
[64] In such a context, “any information can relate to a person by reason of purpose, and all information relates to a person by reason of impact.” [65] It follows that whatever information is processed by an EICA for the purpose or with the result of (accidentally [66] ) treating one individual differently from another has to be considered personal data, triggering protection under the GDPR.
[67] The data obtained from the recipient by the conversational agent, either through voice or additional sources, can only be processed if the controller has a legitimate ground for such processing (Art. 6 GDPR). Considering that no contractual relationship exists between the individual interacting with the EICA and the agent’s provider, that the latter has no legal obligation to process the conversational data, and that no public interest exists in such processing, paragraphs 6(a) (data subject consent) and 6(f) (necessity for the purpose of a controller’s or third party’s legitimate interest) appear the only grounds reasonably available to legitimize the processing of personal data in the context of EICAs under Art. 6 GDPR. Importantly, Art. 6(f) requires a balancing test of the interests involved, clarifying that the legitimate interest of a controller or third party constitutes no legitimizing ground for processing where it is overridden by the interests or fundamental rights of the data subject concerned.
Recital 47 elaborates on the concept of ‘legitimate interests’, noting that “reasonable expectations of data subjects based on their relationship with the controller” should be taken into account, as legitimate interests might for example exist in cases where a client or service relationship is present between the data subject and the controller.
[68] The processing of personal data occurring through the employment of an EICA by individuals for the purpose of placing a restaurant reservation or, conversely, the use of such a system by a restaurant for answering customer calls can therefore be expected to find justification under Art. 6(f), provided the balancing test is passed. Recital 47 furthermore states that “the processing of personal data for direct marketing purposes may be regarded as carried out for a legitimate interest”, suggesting that the operation of EICAs for unsolicited marketing calls may equally be legitimized under the exception of legitimate interests if these are adequately balanced against the interests, rights and freedoms of the receiving individual.
The Art. 29 WP holds that the requirement constitutes no “straightforward balancing test” but instead “requires full consideration of a number of factors”, [69] including safeguards and measures in place such as easy-to-use opt-out tools.
[70] The WP emphasizes the threshold of ‘necessity’ required by the concerned article and clarifies that in order to satisfy Art. 6(f) a ‘legitimate interest’ must be (a) lawful, (b) sufficiently specific and (c) not speculative.
[71] The scale of data collection, lack of transparency about the logic underlying the processing, sophistication of profiling and tracking techniques employed as well as a resulting de facto (price) discrimination are factors that could negate Art. 6(f) as a valid basis of lawful processing.
[72] According to the Working Party, the potentially negative impact on a data subject has to be considered in a broad sense, encompassing also emotional distress such as irritation or fear as well as chilling effects resulting from the impression of continuous monitoring.
[73] The validity of Art. 6(f) in the context of EICAs thus depends on a case-by-case assessment of the interests involved, including the consideration of, inter alia, the nature of the data concerned, the relationship between the data controller and data subject, and the expectations of the latter with respect to data confidentiality. The processing of personal data for the purpose of operating a conversational agent that displays (financially) discriminatory, deceptive or outright manipulative behaviour, or which in any other way has a considerable negative impact on the interacting individual, clearly cannot be justified on the ground of legitimate interest.
[74] As we have outlined above, voice analysis can offer highly sensitive insights relating, for example, to an individual’s emotional or mental health. This would bring the data processed by EICAs within the ‘special categories of personal data’ under Art. 9 GDPR. Art. 9 excludes the legitimate interest of the controller as a valid processing ground. Data subject consent, on the other hand, is a valid ground under Art. 9(2)(a).
Suggesting an even stricter interpretation, it could be argued that, also with respect to less sophisticated conversational agents which do not appear to involve the processing of special category data at first sight, data subject consent should be regarded as the only valid basis for lawful processing. Considering that the content of a communication can, potentially, always include sensitive information concerning one of the conversation partners or another individual, the processing of special category data by systems which are restricted to processing conversational content only – a processing that is necessary to enable an agent to generate adequate responses – cannot be ruled out entirely. Moreover, even without an analysis of someone’s voice, the choice of words, which is inevitably processed by any type of conversational agent, can reveal sensitive insights concerning one’s emotional state, cognitive complexity or personality.
[75] A precautionary approach would therefore require that any processing of conversational data by conversational agents be justified under Art. 6/9 GDPR only on the basis of consent. [76]
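To make the preceding point concrete, a minimal sketch of lexicon-based content analysis in the style of LIWC (n. 75 below) may help. The category word lists here are invented stand-ins, not the actual LIWC dictionaries; the point is only that a few lines of counting over a transcript already yield a crude emotional profile from word choice alone.

```python
# Minimal sketch of lexicon-based analysis of conversational content.
# The category word lists are hypothetical illustrations, not the LIWC dictionaries.
from collections import Counter

LEXICON = {
    "negative_affect": {"sad", "worried", "alone", "tired", "afraid"},
    "positive_affect": {"happy", "great", "relieved", "glad"},
    "health": {"doctor", "pills", "pain", "sleep"},
}

def word_category_shares(transcript: str) -> dict:
    """Return the share of words falling in each (hypothetical) category."""
    words = [w.strip(".,!?'").lower() for w in transcript.split()]
    counts = Counter()
    for word in words:
        for category, vocab in LEXICON.items():
            if word in vocab:
                counts[category] += 1
    total = max(len(words), 1)
    return {category: counts[category] / total for category in LEXICON}

print(word_category_shares("I am so tired lately, I cannot sleep and I feel alone."))
# -> {'negative_affect': 0.1666..., 'positive_affect': 0.0, 'health': 0.0833...}
```

Even so crude a profile gestures at emotional and health-related traits – precisely the kind of insight that motivates the precautionary, consent-only reading suggested above.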
4.1.1 Fairness of intelligent systems

The General Data Protection Regulation 2016/679 (GDPR) regulates the processing of personal data through a framework of principles set out in Art. 5, [77] among which the principles of fairness and transparency are of particular importance with respect to the previously identified challenges of manipulative systems.
The only available ground for lawful processing of personal data in the context of conversational agents thus seems to be consent. Challenged by the principle of fairness, the legitimizing power vested in user consent stands in clear contrast to data subjects’ limited ability to understand the complex technology behind intelligent systems – especially when such complexity is hidden behind the veil of apparently human-like familiarity. This holds particularly true for intelligent systems that process not only the verbal content of a conversation but also the voice features of interacting individuals. As the average data subject’s knowledge about the revealing nature of voice analysis can be expected to be marginal at most, the GDPR – in order to honour the principle of fairness – should be read as mandating comprehensive explanations aimed at supporting data subjects’ understanding of the nature and potential consequences of such processing. Moreover, (mis)using the insights derived from such voice analysis for the purpose of designing more persuasive – that is, manipulative – systems appears to violate the principle of fairness, raising the question of where to draw the line between ‘making a user experience more intuitive and pleasant’ and ‘designing a system that pushes all the right buttons to trigger users’ sympathy and (inappropriate) trust’. Likewise, EICAs that fail to disclose their synthetic nature at the beginning of an interaction, thereby misusing their ability to authentically mimic human behaviour for the intended deception of interacting individuals, violate the principle of fairness.
4.1.2 Transparency of AI behaviour

The importance of disclosing an EICA’s synthetic nature illustrates the association between the principles of fairness and transparency, and their respective relevance in the context of manipulative systems: a lack of transparency concerning the synthetic nature of the calling voice, the data and processing capacities available to it, or the interests behind it results in an unfair imbalance of power that greatly disadvantages the called individual, who becomes an easy target for the potentially opaque intentions of the calling AI. One obvious difficulty arising in this context is the challenge of determining precisely where persuasion ends and manipulation begins. Another difficulty arises with respect to detecting and evidencing manipulative behaviour of a conversational agent: if done well, individuals targeted by a manipulative system are likely not to notice the manipulation – let alone in cases where they are not even aware that they are interacting with an AI rather than an actual human being at the other end of the line. Respect for the principles of fairness and transparency is thus fundamental, and a clarification of their exact meaning and related requirements in the context of human-sounding voice AI would be essential.
4.2 Privacy law

In their “Typology of Privacy”, Koops et al. describe privacy as a complex “set of related concepts that together constitute privacy” [78] and identify types of privacy, including privacy of relations, [79] privacy of person [80] and privacy of personal data.
According to the authors, privacy can imply both a freedom from, as well as a freedom of, something.
As a freedom of, privacy’s close association with the concept of ‘autonomy’ is apparent.
[81] While privacy as a negative right appears more directly connected to data protection concerns, the understanding of privacy as a positive freedom highlights the strong link between privacy protection and the issues of manipulation and deception.
Referring to the eight primary types of privacy suggested by Koops et al., the context of EICAs most visibly gives rise to concerns with respect to individuals’ communicational, intellectual, [82] decisional [83] and associational [84] privacy.
[85] While the GDPR’s broad protective scope appears to already safeguard communicational and informational privacy, it seems important that privacy law complements the respective legislation in particular through provisions emphasizing the importance of transparent disclosure of intelligent agents so as to ensure the protection of individuals’ intellectual, decisional and associational privacy.
4.3 ePrivacy Directive

In contrast to the GDPR, the ePrivacy Directive [86] is not restricted to the protection of personal data itself but covers confidentiality of communication more broadly.
Of particular relevance in our context is Art. 13 of the ePrivacy Directive, which introduces the concept of “automated calling and communication systems without human intervention (automatic calling machines)” to refer to marketing calls “made by an automated dialling system that plays a recorded message”.
[87] While the technology of EICAs as discussed here did not exist at the time of the Directive’s writing, its similarity with ‘automatic calling machines’ suggests the applicability of Art. 13 by analogy. Similar to the unsolicited call by an automatic calling machine, the individual responding to the call of an EICA is likely not to have requested the interaction with the machine.
[88] The relevance of Art. 13 in the context of conversational agents seems to depend on the provision’s underlying intention: is the article meant merely as a protection from the nuisance of unrequested mass-calls, or does it aim to safeguard individuals when interacting with automated communication systems more generally? Recital 40 of the Directive describes the provision as a safeguard against the intrusion of privacy caused by highly scalable automated calling machines. Considering the connotation of ‘intrusion’, it appears valid to suggest an analogy between the purpose of Art. 13 and the tort of trespassing.
[89] Among many operational purposes, the employment of EICAs for automated marketing calls is indeed conceivable, suggesting unsolicited communication as an additional concern arising with autonomously operating calling agents – a context in which Art. 13 ePrivacy Directive would clearly be applicable. However, it can be debated whether the intended protective scope of the provision also covers scenarios similar to those described, in which the recipient’s interest in the call would not be challenged if the caller were a human being. Clearly, in such a case it is not the occurrence of the call itself, but rather the processing of personal data by, and the persuasive potential of, EICAs that might give rise to privacy concerns.
The “Typology of Privacy” [90] illustrates that privacy interests relate not only to spatial privacy – the type of privacy protected by the action of trespass – but that they also, inter alia, include individuals’ decisional and intellectual privacy. While it appears that Art. 13 was written with the protection of the former in mind, one can argue that the purpose of protecting individuals from an intrusion of their privacy should be interpreted more broadly, so as to acknowledge the concept of privacy in its complexity. Following such reasoning, we suggest reading Art. 13 as safeguarding individuals more generally when interacting with automated communication systems, considering the potential infringement of individuals’ communicational, decisional and intellectual privacy through the data processing involved in, and the persuasive potential of, such systems. Irrespective of a recipient’s general interest in the call, the recipient does have an interest in being protected – if not against the occurrence of the communication itself, then still against the potentially privacy-intrusive implications of interacting with an intelligent data-processing system.
Under Art. 13, user consent is required for such calls, implying that even if conversational AI did not involve the processing of communication data or personal information, the interacting individual would have to give prior agreement to a call they themselves did not initiate. However, since the article lists a “purpose of direct marketing” as an explicit attribute of the automated calling systems covered by its application, it is inapplicable to conversational agents employed in a non-marketing context. Besides the requirement of target consent, Art. 13(4) ePrivacy Directive explicitly prohibits “in any event (…) practice[s] which disguise or conceal the identity of the sender on whose behalf the communication is made”. While this provision appears to offer a solution to the identified need to demand the transparent disclosure of intelligent systems, again its application is limited to practices with “the purpose of direct marketing”.
4.4 The ePrivacy Regulation

While a first proposal text was published in January 2017, work on the Regulation’s draft continues at the time of writing, leaving the most recent proposal and comments published by the Council in September 2018 as the basis for the current analysis.
The ePrivacy Regulation [91] appears to fill the regulatory gap caused by the Directive’s restricted definition of ‘automated calling machines’ by explicitly defining “automated calling and communication systems” (Art. 4(3)(h)), leaving aside the marketing-purpose context criticized previously. The respective definition refers to “systems capable of automatically initiating calls to one or more recipients in accordance with instructions set for that system, and transmitting sounds which are not live speech” [92] – a definition that seems to cover emotionally intelligent conversational agents. While paragraph (3)(f) of the same article lists such systems as one of multiple technologies that can be used for the purpose of “direct marketing communications”, the ePrivacy Regulation achieves a disjunction of this purpose from the definition of automated communication systems, improving on the respective provision of the Directive. However, a stand-alone section elaborating on the risks, rights and requirements related to automated communication systems remains missing from the current Regulation draft. In fact, concerns such as the need to obtain recipients’ consent prior to the interaction with an EICA or the requirement of identity disclosure are only raised with regard to unsolicited and direct marketing communications in Art. 16 of the Regulation. A consideration of scenarios in which automated calling systems could be used for purposes other than marketing, such as the scheduling of personal appointments – scenarios which nevertheless imply risks for the rights and freedoms of the interacting individuals due to the necessarily involved processing of their (conversational) data – thus remains absent. Similarly, while the Regulation demands revealing the identity of the natural or legal person behind the automated marketing communication system, thereby suggesting a promising contribution to the protection of individuals’ privacy interests, it lacks a general requirement to disclose the synthetic nature of deceptively human-sounding EICAs in non-marketing contexts.
5 Conclusion

While human-sounding, emotionally intelligent conversational agents (EICAs) constitute a persuasive technology by nature – simply because they inherently persuade interacting users to treat them according to social protocols through their human-imitating behaviour – their designation as manipulative technology depends on a case-by-case assessment of the particular intentions embedded, their potential consequences, and the means pursued to achieve them. The degree of control exerted, and the extent to which targets’ capacity for autonomous decision-making is intentionally undermined, should be considered markers to identify the presence of manipulative rather than merely persuasive interventions.
Convincingly human AI agents are likely to trigger anthropomorphism, resulting in mindless social behaviour by interacting individuals, who might easily misjudge the computing capacities and thus the overall power of the friendly voice on the other end of the phone line. With the general trend towards embedded and more seamless computing systems, computers’ presence and the potential consequences thereof become increasingly opaque for individuals, who nevertheless find themselves subjected to the techno-regulatory impact of such systems. Besides ethical concerns related to the affront to individual freedom, the danger of identity fraud and the justifiability of manipulation, this opaqueness also deprives individuals of the informational basis needed to make sovereign choices with respect to the protection of their privacy and personal data. In a way, the strength of EICAs is also their greatest weakness: they are purposefully designed to appear human-like, to conceal their synthetic nature and computing capacities. Demanding transparency is thus antipodal to the engineers’ efforts and the technological achievement of human-like AI, implying a conflict between regulatory and economic interests – a conflict in which the protection of fundamental rights should be watched particularly carefully.
We have argued that existing European legislation does, in principle, provide protection to data subjects regarding the processing of their personal data by intelligent conversational agents. While these provisions are certainly of relevance in the context of potentially manipulative technologies, the particular concerns arising with humanoid EICAs – such as inappropriate, anthropomorphism-triggered self-disclosure or people’s growing inability to comprehend the synthetic nature and capacities of the computing systems surrounding them – are not addressed.
The identified limitation illustrates the currently changing nature of AI-powered (communication) products and suggests a lack of awareness thereof on the side of the legislator. EICAs are not experienced only by consenting users, nor is their operation restricted to actors with commercial interests. They can also be employed by private individuals for their personal interests, resulting in a shift of the implied (privacy) concerns onto individuals that has not been considered by current privacy legislation, and raising questions concerning the desirable allocation of liabilities and responsibilities. This is not a matter of data protection and privacy law only, but also one of contract, consumer protection and liability law.
By definition, the setting of social interactions and relationships constitutes a core interest of the societies we live in – urging us to continuously consider the values we embed into those technologies that ever more casually enter our lives in the form of social actors. Further discussion should thus be opened on the extent to which we wish such integration to take place: besides pressing for transparency and recipients’ consent, should we regulate the sophistication of, and the data that may be used for, personalized human-machine interaction? Do we wish to prohibit systems that exploit individuals’ (emotional) weaknesses, and where do we draw the line between the design of a convenient user experience and persons’ intentional deception? The trend towards ever more seamless human-machine interactions suggests that those instances in which we consciously interact with such systems, are aware of their presence, and are able to avoid leaving behind a data trace simply by being, are likely to decline rapidly in the future.
[1] Joshua Montgomery, “Congratulations to Google Duplex! What’s Next?” (2018), available at https://mycroft.ai/blog/congrats-on-google-duplex-whats-next/ (accessed 12 September 2018).
[2] Mark Bergen, “Google Grapples With ‘Horrifying’ Reaction to Uncanny AI Tech” ( Bloomberg , 10 May 2018), available at https://www.bloomberg.com/news/articles/2018-05-10/google-grapples-with-horrifying-reaction-to-uncanny-ai-tech (accessed 12 September 2018).
[3] See for example: ‘Lyrebird AI’, part of ‘Descript’, https://descript.com/
[4] Cass R. Sunstein, “Fifty Shades of Manipulation” (2015) 1(3-4) Journal of Behavioral Marketing 213-244.
[5] Flavio Bonomi et al., “Fog Computing and Its Role in the Internet of Things” [2012] Proceedings of the first edition of the MCC workshop on Mobile Cloud Computing 13.
[6] See generally James W. Pennebaker and Anna Graybeal, “Patterns of Natural Language Use: Disclosure, Personality, and Social Integration” (2001) 10 Current Directions in Psychological Science 90-93.
[7] Gillinder Bedi et al., “Automated Analysis of Free Speech Predicts Psychosis Onset in High-Risk Youths” (2015) 1 npj Schizophrenia 1-7.
[8] Athanasios Tsanas et al., “Novel Speech Signal Processing Algorithms for High-Accuracy Classification of Parkinson’s Disease” (2012) 59 IEEE Transactions on Biomedical Engineering 1264-1271.
[9] Woodrow Hartzog, Privacy’s Blueprint (Harvard University Press, 2018).
[10] Yaniv Leviathan and Matias Yossi, “Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone” (2018), available at https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html (accessed 12 September 2018).
[11] Giorgio Vassallo et al., “Phrase Coherence in Conceptual Spaces for Conversational Agents” in PCY Sheu et al. (eds.), Semantic Computing (Wiley, 2010).
[12] Ewa Luger and Gilad Rosner, “Considering the Privacy Design Issues Arising from Conversation as Platform” in R.E. Leenes et al. (eds.), Data Protection and Privacy – The age of Intelligent Machines (Oxford: Hart Publishing, 2018), pp. 193-212.
[13] Leviathan and Yossi ( supra , n. 10).
[14] See generally Marc Schröder, “Emotional Speech Synthesis: A Review” Seventh European Conference on Speech Communication and Technology (2001).
[15] B.J. Fogg, “Persuasive Computers: Perspectives and Research Directions” (1998) CHI 226-232.
[16] B.J. Fogg, Gregory Cuellar, and David Danielson, Motivating, Influencing, And Persuading Users: An Introduction to Captology (CRC Press, 2009), p. 140.
[17] Byron Reeves and Clifford Nass, The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places (Cambridge: CUP, 1996).
[18] Clifford Nass and Youngme Moon, “Machines and Mindlessness: Social Responses to Computers” (2000) 56 Journal of Social Issues 81-103.
[19] Reservations to this might be implied by the uncanny valley effect.
[20] Fogg ( supra n. 15), p. 226.
[21] Given the development of intelligent systems since the writing of this sentence in 1998, one could wonder whether self-learning machines might not one day be regarded as actors holding intentions themselves. Indeed, consider a scenario in which an intelligent phone assistant, after being informed that the originally desired timeslot was unavailable, asks whether an appointment would be possible anytime later that day. Does such a request still fall within the user-dictated intention of booking an appointment or does it exceed it, making the question an expressed intention of the system itself? Obviously, the exact phrasing of the user’s instruction – did she ask the system to book ‘an appointment at 5pm’ or did she additionally mention ‘or if that’s unavailable, any time later would also be fine’ – would already impact the outcome of such an analysis. It seems illogical, though, that (if intentionality is considered a question of ability) the same machine could in some instances be regarded as an intentional actor while being denied such intentionality in other situations. This paper remains conservative with regard to the personal interests of machines and understands the intentionality of a technology as equivalent to the intentions of its creators and employing users – reflecting what Fogg calls a computer’s endogenous and autogenous intent respectively. Fogg ( supra n. 15), p. 226.
[22] Leviathan and Yossi ( supra n. 10).
[23] Once Duplex-like systems escape the current limits of only operating in the niche contexts of booking restaurant tables or hairdresser appointments, further user intentions can be expected such as handing over uncomfortable social interactions to the intelligent assistant. Similarly, users could pretend to be their personal assistant by introducing themselves as such, intending to escape the full responsibility of their statements in a given conversation.
[24] Ruth Faden and Tom Beauchamp, A History and Theory of Informed Consent (OUP, 1986).
[25] Anne Barnhill, “You’re Too Smart to Be Manipulated By This Paper” (2010), available at https://vdocuments.mx/1-youre-too-smart-to-be-manipulated-by-this-paper-anne-barnhill-.html (accessed 21 July 2020), p. 22.
[26] Joel Rudinow, “Manipulation” (1978) 88 Ethics 338-347.
[27] Barnhill ( supra n. 25), p. 24.
[28] Due to space constraints we have only included two of the five affordances in the table.
[29] It is conceivable that an individual calls a restaurant that employs a conversational agent on its phone line. While in such a scenario it would have been the individual who practically initiated the interaction, we nevertheless consider her the passive recipient, as she intended to communicate with the human receptionist at the restaurant rather than consciously choosing to engage an AI, resulting in a human-machine interaction.
[30] Fogg ( supra n. 15), p. 226.
[31] Ian R. Kerr and Marcus Bornfreund, “Buddy Bots: How Turing’s Fast Friends Are Undermining Consumer Privacy” (2005) 14 Presence: Teleoperators and Virtual Environments 647-655.
[32] Kerr also raises the point that the intentional design of intelligent systems aimed at triggering anthropomorphic behaviour appears intuitively repulsive from a moral point of view, as it deludes individuals into ‘friendships’ with artificial entities and the illusion of a mutually shared experience.
[33] See for a similar account in the context of cochlear and retinal implants, Bert-Jaap Koops and Ronald Leenes, “Cheating with implants: Implications of the hidden information advantage of bionic ears and eyes” in M.N. Gasson, E. Kosta, and D.M. Bowman (eds.), Human ICT Implants: Technical, Legal and Ethical Considerations (TMC Asser, 2012) p. 113-134.
[34] Luger ( supra n. 12).
[35] It is left to the reader to think of remarks that might carelessly slip off your tongue in a casual conversation, which you might think twice about if you knew your counterpart to be a data-infused profiling machine.
[36] However, Weizenbaum’s findings with ELIZA suggest that the human tendency to interact socially with machines exhibiting human characteristics holds even in cases where the individual is well aware of the synthetic nature of their counterpart (Joseph Weizenbaum, Computer Power and Human Reason (WH Freeman and Company, 1976)).
[37] Ibid.
[38] Fritz Heider and Marianne Simmel, “An Experimental Study of Apparent Behaviour” (1944) 57(2) American Journal of Psychology 243-259.
[39] Burkhard Schafer, “The taming of the sleuth – problems and potential of autonomous agents in crime investigation and prosecuting” (2006) 20 International Review of Law, Computers & Technology 63-76.
[40] Fogg ( supra n. 15).
[41] E.g. the dynamic pricing schemes of airlines which, based on their model and several data points known about an individual, personalize the ticket prices offered to respective customers with the intention of maximizing the overall profit by balancing premium prices against the risk of being left with empty airplane seats.
[42] In existing publications Google states that it uses context parameters, conversation histories “and more” (Leviathan and Yossi ( supra n. 10)), which appears to be a conveniently broad notion, neither including nor excluding any type of data really.
[43] For instance William Zancho et al., “Determination of Emotional and Physiological States of a Recipient of a Communication”, available at https://patentimages.storage.googleapis.com/e6/d6/c8/04858db5fb697b/US7874983.pdf (accessed 21 July 2020); Bryan Horling et al., “Forming Chatbot Output Based on User State” https://patents.google.com/patent/US9947319B1/en (accessed 21 July 2020).
[44] Huafeng Jin and Shuo Wang, “Voice-Based Determination of Physical and Emotional Characteristics of Users”, available at https://patents.google.com/patent/US10096319B1/en (accessed 21 July 2020).
[45] Highly interesting in terms of its dubiousness is also the included patent claim for targeting advertisements to match the detected mood of a user, offering advertisers the possibility to pay for emotionally targeted placement of their products – a promising marketing strategy given the significant correlation between impulsive buying and customer features such as personality profiles (Bas Verplanken and Astrid Herabadi, “Individual Differences in Impulse Buying Tendency: Feeling and No Thinking” (2001) 15 European Journal of Personality S71) or current emotional state (Peter Weinberg and Wolfgang Gottwald, “Impulsive Consumer Buying as a Result of Emotions” (1982) 10 Journal of Business Research 43-57).
[46] Margi Murphy, “Amazon Sends Pharmacy Stocks Tumbling after Snapping up Online Chemist” ( The Telegraph , 2018), available at https://www.telegraph.co.uk/technology/2018/06/28/amazon-sends-pharmacy-stocks-tumbling-snapping-online-chemist/ (accessed 19 October 2018).
[47] Goncalo Carrico, “The EU and Artificial Intelligence: A Human-Centered Perspective” (2018) 17 European View 29-36.
[48] See for example: ‘Lyrebird’ ( supra n. 3).
[49] For illustration of existing possibilities one may consider the supposed engagement of Cambridge Analytica in the 2016 US presidential election or popularly discussed examples of visual deep-fakes involving well-known politicians.
[50] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (hereinafter ‘GDPR’) Art. 1(1).
[51] Article 29 Data Protection Working Party, “Opinion 4/2007 on the Concept of Personal Data” (WP136, 2007).
[52] Ibid., p. 6.
[53] Ibid., p. 7.
[54] Ibid., p. 8.
[55] GDPR, Recital 26.
[56] Nadezhda Purtova, “The Law of Everything. Broad Concept of Personal Data and Future of EU Data Protection Law” (2018) 10 Law, Innovation and Technology 40-81, p. 47.
[57] Article 29 Data Protection Working Party, “Opinion 05/2014 on Anonymisation Techniques” (WP216, 2014), p. 7.
[58] European Commission, “What Is Personal Data?”, available at https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-personal-data_en (accessed 21 July 2020).
[59] Purtova ( supra n. 56), p. 47.
[60] Art. 29 Working Party ( supra n. 51), p. 10.
[61] Ibid., p. 10.
[62] Ibid., p. 11.
[63] Ibid., p. 11.
[64] Mireille Hildebrandt, “Law as Information in the Era of Data-Driven Agency” (2016) 79 The Modern Law Review 1-30.
[65] Purtova ( supra n. 56), p. 55.
[66] Ibid., p. 56.
[67] One could challenge whether the customization of an agent’s voice, choice of words or pace of speech constitutes sufficiently different treatment. However, the Art. 29 Working Party explicitly established a very low threshold of impact, implying that such customizations are to be regarded as ‘relating to’ an individual by purpose and/or impact.
[68] GDPR, Recital 47.
[69] Article 29 Data Protection Working Party, “Opinion 06/2014 on the Notion of Legitimate Interests of the Data Controller under Article 7 of Directive 95/46/EC” (WP217, 2014), p. 3.
[70] Ibid., p. 31.
[71] Ibid., p. 25.
[72] Ibid., p. 32.
[73] Ibid., p. 32.
[74] The purpose(s) for which data are being processed (art. 5(1)(b) GDPR) by the conversational agent are a significant issue to be discussed as well, but due to space constraints we leave this for another occasion.
[75] Y.R. Tausczik and James W. Pennebaker, “The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods” (2010) 29 Journal of Language and Social Psychology 24-54.
[76] If processing is to be legitimised by consent, this raises a whole range of issues, because the consent must be informed, freely given, unambiguous etc. We leave these for another occasion.
[77] A much more extensive treatment of the applicability of the GDPR and its requirements can be found in Pauline Kuss, Deception by Design for the Goal of Social Gracefulness: Ethical and Legal Concerns of Humanlike Conversational Agents (Tilburg, 2019).
[78] Bert-Jaap Koops et al., “A Typology of Privacy” (2017) 38 University of Pennsylvania Journal of International Law 483-575, p. 488.
[79] Encompassing the protection of the establishment of social relationships and communication.
[80] Encompassing the protection of thought and personal decision-making.
[81] Koops et al., supra n. 78, p. 514.
[82] The intentional design of systems meant to deceive people with respect to their synthetic nature challenges the privacy of persons’ opinions and beliefs encompassed in this privacy type.
[83] Decisional privacy appears generally challenged by persuasive and manipulative technologies and is equally at risk in the context of intelligent systems which conceal their synthetic nature as such undermine individuals’ capacity to make self-serving privacy choices.
[84] Describing individuals’ freedom to choose whom to interact with, associational privacy is challenged in cases where adequate disclosure of the synthetic nature of a conversational agent is missing as this undermines individuals’ informed choice concerning the interaction they decide to engage in.
[85] It could be argued that human-sounding conversational agents also threaten to compromise spatial privacy, as individuals’ capacity to execute control over the actors they admit to the private space of their personal phone line would be undermined in cases where they are unable to know of the synthetic nature and thus of the computing capacities of the voice at the other end.
[86] European Parliament; Council of the European Union, “Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 Concerning the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector (Directive on Privacy and Electronic Communications)” (2002) L 201 Official Journal of the European Communities 37.
[87] Information Commissioner’s Office, Guide to the Privacy and Electronic Communications Regulations (2018), p. 16.
[88] On the other hand, in cases where a conversational agent is employed to place a reservation with a restaurant, it could be argued that the latter did implicitly request such a call by stating an interest in being called for the purpose of reservations when offering a phone number to prospective customers. Also with respect to private communications, such as the scheduling of a personal meeting between two friends, the interacting individual might not have chosen to converse with a machine and yet, having an interest in seeing their friend, can be expected to welcome the call.
[89] Such analogy was made by the California Supreme Court in the context of unsolicited e-mails in Intel Corp. v. Hamidi , reasoning that the act of connecting oneself to the internet or buying a telephone cannot be considered an invitation to receive masses of unwanted e-mails and phone calls. See Intel Corp. v. Hamidi 30 Cal. 4th 1342 (2003).
[90] Koops et al., supra n. 78.
[91] European Commission, “Proposal for a Regulation of the European Parliament and of the Council Concerning the Respect for Private Life and the Protection of Personal Data in Electronic Communications and Repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communication”, available at https://ec.europa.eu/digital-single-market/en/news/proposal-regulation-privacy-and-electronic-communications (accessed 21 July 2020).
[92] Ibid., art. 4(3)(h).
The Concept of ‘Information’: An Invisible Problem in the GDPR

Dara Hallinan* and Raphaël Gellert**

SCRIPTed, Volume 17, Issue 2, August 2020

© 2020 Dara Hallinan and Raphaël Gellert. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

Information is a central concept in data protection law. Yet, there is no clear definition of the concept in law – in legal text or jurisprudence. Nor has there been extensive scholarly consideration of the concept. This lack of attention belies a concept which is complex, multifaceted and functionally problematic in the GDPR. This paper takes an in-depth look at the concept of information in the GDPR and offers up three theses: (i) the concept of information plays two different roles in the GDPR – as an applicability criterion and as an object of regulation; (ii) the substantive boundaries of the concepts populating these two roles differ; and (iii) these differences are significant for the efficacy of the GDPR as an instrument of law.
Keywords

Data protection; GDPR; information theory; genetic data; artificial intelligence; machine learning

Cite as: Dara Hallinan and Raphaël Gellert, “The Concept of ‘Information’: An Invisible Problem in the GDPR” (2020) 17:2 SCRIPTed 269, https://script-ed.org/?p=3885. DOI: 10.2966/scrip.170220.269

* Senior researcher, IGR, FIZ Karlsruhe – Leibniz-Institut für Informationsinfrastruktur GmbH, Karlsruhe, Germany, dara.hallinan@fiz-karlsruhe.de
** Assistant Professor, Faculty of Law, Radboud University, Nijmegen, the Netherlands, r.gellert@jur.ru.nl

1 Introduction

Information is a central concept in data protection law under the General Data Protection Regulation (GDPR).
[1] This should be no surprise. Information is, after all, the substance the collection, exchange and manipulation of which provides the rationale for the existence of data protection law. For a demonstration of the significance of the concept in the GDPR, one needs to look no further than the fact that the concept constitutes a key criterion in the concept of personal data – outlined in art. 4(1) – and therefore plays a defining role in determining whether the law, and all substantive provisions therein, applies at all.
Yet, there is no clear definition of information in European data protection law. There is no definition provided in the text of the GDPR or in prior European Union (EU) data protection law. Nor is there a structured and comprehensive definition provided in relevant jurisprudence. There has been certain scholarly attention paid to the concept in data protection law, notably in the excellent work of Bygrave.
[2] This work, however, has limitations. The work does not provide a structured approach for the analysis of the functions or boundaries of the concept. Nor does it extensively differentiate between conceptualisations of the concept.
We believe the lack of legal and scholarly attention belies the reality of a concept which is complex, multifaceted and, ultimately, functionally problematic in the GDPR. From this perspective, this paper offers an in-depth look at the concept of information in the GDPR and argues three cumulative theses:
There are two different roles played by the concept of information in the GDPR: information as an applicability criterion; and information as an object of regulation.
The substantive boundaries of the concepts of information populating these two roles differ – i.e. these are two different concepts, not relating to the same substantive phenomenon.
The substantive differences between these two concepts of information are significant for the efficacy of the GDPR as an instrument of information law.
[3] The paper begins by sketching the two roles played by the concept of information in the GDPR (section 2). The paper then advances a conceptual framework – built on three axes – for identifying the substantive boundaries of conceptualisations of information in each of these two roles (section 3). Using this framework, the paper then maps the substantive boundaries of the two concepts: first, the concept of information as an applicability criterion (sections 4-8); second, the concept of information as an object of regulation (sections 9-12). Building on this mapping, the paper highlights the substantive differences between the two concepts (section 13). The paper then shows how divergences between the two concepts are problematic for the GDPR as an instrument of information law (sections 14-16). Finally, the paper considers the legal options available for addressing these problems (section 17).
We begin by sketching our first thesis: There are two different roles played by the concept of information in the GDPR.
2 Two Different Roles for the Concept of Information in the GDPR

Considerations of the role of the concept of information in data protection law – including in the GDPR – have tended to explicitly identify only one role: information as an applicability criterion (role 1).
[4] This is understandable, as the concept explicitly appears in legal text only in this role. We, however, would suggest that the concept also plays a second role: information as an object of regulation (role 2). Below, we sketch the function of each of these roles in the GDPR.
In the first role, as an applicability criterion, the concept of information functions to define whether the GDPR can apply ratione materiae.
Art. 2 of the GDPR outlines the law’s scope. Art. 2(1) elaborates a key applicability criterion – the concept of personal data: “This Regulation applies to the processing of personal data wholly or partly by automated means.” Art. 4(1) provides a definition for personal data in which information is listed as an explicit substantive criterion for the existence of personal data: “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’)” (emphasis added). Thus, the presence or absence of information determines which substances can, or cannot, be personal data and to which the GDPR and its substantive provisions can apply.
[5] In its second role, as an object of regulation, the concept of information functions as a substance around which the substantive principles of the GDPR were designed, and in relation to which these principles will act – much as, for example, medical devices are the object of the EU medical devices law.
[6] This concept is implicit in the GDPR. Specifically, this concept must have taken some form in the mind of the legislator for the legislator to have engaged in the choice and design of substantive provisions. For example, in art. 15 of the GDPR – concerning data subjects’ right of access to their personal data – the data subject has the right to obtain, from the data controller, a copy of their personal data. To provide this copy, the controller must perform a set of actions regarding the substance of information. That such a provision appears in the GDPR means the legislator must have had some image of the characteristics of the substance of information, and of how a controller might engage with it.
Superficially, it would make sense that the concepts of information occupying these two roles would converge on the same substantive phenomenon – i.e. the concepts would have the same substantive boundaries. A closer look, however, reveals reason to think otherwise. As will be discussed in the next section, there are numerous concepts of information and the boundaries of these concepts can differ significantly. These differences often result from the different functions the concept of information plays in the context in which it is employed. In this regard, there is a clear difference between the function of the concept of information in each of its two roles in the GDPR. As an applicability criterion, the concept performs a normative function defining whether a substance qualifies for protection at all.
[7] As an object of regulation, the concept plays a descriptive function describing a substance with a specific set of properties around which to legislate, and subsequently, act.
The previous section sketched our first thesis that the concept of information plays two roles in the GDPR. We highlighted the following two roles:
Information as an applicability criterion (role 1).
Information as an object of regulation (role 2).
This section also suggested there is reason to think the substantive boundaries of the concepts occupying the two roles may not converge on the same substantive phenomenon. Against this background, we thus move to elaborate our second thesis: The substantive boundaries of the concepts of information populating these two roles differ.
The first step in demonstrating this thesis is to elaborate a general framework for the structured mapping and differentiation of concepts of information.
3 A Framework for Mapping the Two Concepts of Information in the GDPR

A comprehensive mapping of the boundaries of a legal concept ideally follows within a structured conceptual framework outlining the range of possible dimensions of the concept. Identifying such a framework would normally follow a consideration of relevant law and jurisprudence. In relation to the concept of information in the GDPR, however, there are insufficient legal resources to identify such a framework.
[8] To overcome this obstacle, we construct a structured conceptual framework via a consideration of the phenomenology of information, not from a jurisprudential, but from a general perspective. At the highest level of abstraction, information is a resource for the resolution of uncertainty. In this regard, myriad disciplines have adopted concepts of information. In doing so, however, each discipline – depending on the purpose of the concept in the discipline – has defined the concept differently. We identify a set of three key axes differentiating concepts of information across disciplines.
[9] Taken together, these axes provide a structured conceptual framework within which to map concepts of information in the GDPR.
Axis 1: the degree to which information must be semantic – relate to meaning in the world. Not all conceptualisations of information require information to convey meaning about the world. Mathematical concepts of information, for example, focus on the probabilistic relationships between systems regardless of semantic content. The typical example is Shannon information, which concerns the statistical properties of systems and the correlation between the states of two systems – regardless of the semantic content of states. [10]
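For orientation, these quantities can be stated explicitly (a textbook formulation added here purely for illustration): the Shannon entropy of a system X and the mutual information between two systems X and Y are defined entirely over probability distributions,

$$H(X) = -\sum_{x} p(x)\log_2 p(x), \qquad I(X;Y) = \sum_{x,y} p(x,y)\log_2\frac{p(x,y)}{p(x)\,p(y)}$$

Neither quantity depends on what the states x and y denote, which is precisely why such mathematical concepts of information carry no semantic requirement.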
In turn, not all semantic information corresponds to meaning in the same way. Most importantly, information may differ in the degree of structuring required to convey meaning to an agent. Information may be deliberately structured to convey meaning frictionlessly – for example, a factual sentence – or it may be less structured, requiring the addition of more, or less, complex interpretative frameworks to extract meaning.
[11] Axis 2: the degree to which information must be stored and transferred within, and across, specific media. Certain conceptualisations of information focus on the requirement for information storage and transfer within specific media. Certain definitions in computer science, for example, may insist on the necessity for information storage and transfer in computer media or at least in human-created media.
[12] There are other definitions of information, however, which cast the net wider. Certain philosophical definitions point to the feasibility of naturally occurring information.
[13] This information is stored in naturally occurring physical phenomena. The typical example is the rings in a tree trunk. These rings exist independently of human storage media – and even of human observation. Yet, the rings correlate with the age of the tree and may therefore be considered in terms of information.
[14] Axis 3: the degree to which information must relate to human cognition. Certain conceptualisations of information focus on some degree of human cognition in the creation or perception of information. These definitions tend to correspond, in terms of use, with those requiring information to be stored or transferred on specific media. For example, the International Organization for Standardization – in defining information technology vocabulary – suggests information to be: “knowledge which reduces or removes uncertainty.” [15] Knowledge requires cognition. Other definitions are human cognition ambivalent. For example, biological conceptualisations of information regard the function of DNA – both in terms of inheritance and translation between genotype and phenotype – in terms of information.
[16] DNA information is created, and operates, independently of human cognition.
We now move to map the concept of information in each of its two roles in the GDPR. The mapping process for the concept in each role involves two steps:
Provide an overview of the background to the concept of information to offer perspective and orientation to the mapping process.
Map the concept of information against each of the three axes of differentiation outlined in this section, above.
Both steps are applied first to the concept of information as an applicability criterion (role 1) and then to the concept of information as an object of regulation (role 2).
4 Providing an Overview of the Background of Information as an Applicability Criterion (Role 1)

The concept of information as an applicability criterion has a long and stable history in European data protection law. This history stretches back to the earliest international instruments of data protection law with European relevance. The concept was evident as an applicability criterion in the Organisation for Economic Co-operation and Development Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980) as well as in the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981) – the concept also appears in the updated versions of these instruments.
[17] The concept was then retained, in fundamentally unaltered form, in both Directive 95/46 – the Data Protection Directive and the forerunner to the GDPR – and in the GDPR.
[18] As discussed above – in section 2 – arts. 2(1) and 4(1) recognise the concept of information as one criterion, amongst a set of applicability criteria, all of which must be fulfilled for the GDPR to apply.
[19] The criterion of information, however, is conceptually distinct from other art. 2(1) and 4(1) criteria. The criterion applies to a basic class of substances to which the GDPR can apply regardless of context – i.e. a substance either is, or is not, information, regardless of subsequent elements of context. All other art. 2(1) and 4(1) criteria are then context-dependent. The applicability of other art. 4(1) criteria defining the concept of personal data – “relating to an identified or identifiable natural person” – are contingent on the presence of a contextually defined link between information and a specific individual. The applicability of the art. 2(1) criteria of “processing…wholly or partly by automated means” are contingent on a set of actions being done to information.
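The distinction drawn here can be made vivid with a schematic sketch (ours, and purely illustrative: the predicate names and toy logic are invented, and nothing below amounts to a legal test). The information criterion is a predicate over the substance alone, while every other criterion additionally takes context as input:

```python
# Schematic sketch of the cumulative applicability logic of arts. 2(1) and 4(1) GDPR.
# Predicate names and toy logic are invented for illustration; this is not a legal test.

def is_information(substance: str) -> bool:
    # Context-free criterion: a substance either is, or is not, information.
    return bool(substance)

def relates_to_person(substance: str, context: dict) -> bool:
    # Context-dependent: a link, in content, purpose or result, to an individual.
    return context.get("linked_individual") is not None

def person_identifiable(substance: str, context: dict) -> bool:
    # Context-dependent: identification by means reasonably likely to be used.
    return context.get("identification_feasible", False)

def automated_processing(substance: str, context: dict) -> bool:
    # Context-dependent: actions performed on the information.
    return context.get("automated", False)

def gdpr_applies(substance: str, context: dict) -> bool:
    # All criteria are cumulative; only the first ignores context.
    return (
        is_information(substance)
        and relates_to_person(substance, context)
        and person_identifiable(substance, context)
        and automated_processing(substance, context)
    )

print(gdpr_applies("voice recording of a call",
                   {"linked_individual": "caller",
                    "identification_feasible": True,
                    "automated": True}))  # -> True
```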
Given this conceptual specificity, the substantive content of the concept can be considered independently from other art. 2(1) and 4(1) applicability criteria. This possibility has been explicitly recognised in jurisprudence. The Article 29 Working Party, for example, in their Opinion on the Concept of Personal Data, devote a specific section to the consideration of the concept of information apart from other art. 4(1) criteria.
[20] Equally, the CJEU, when considering the applicability of the concept of personal data in the Nowak case, considered the concept of information as an applicability criterion separately from other art. 2(1) applicability criteria.
[21] Indeed, recognising the independence of the various art. 2(1) and 4(1) applicability criteria also has a long scholarly tradition. Consider, for example, the independent scholarly analyses of the art. 2 concepts of “related to” and “identifiability.” [22]

The concept has always been intended to be understood and interpreted in light of its function in EU data protection law. The base rationale of EU data protection law, as outlined in art. 1(1) of Directive 95/46 – substantively unchanged in the GDPR – was to: “protect the fundamental rights and freedoms of natural persons.” The aim of data protection, broadly put, is thus to provide protection whenever individual rights are threatened in the information society. Accordingly, since its first use in EU data protection law, the concept was intended to be interpreted broadly and flexibly to ensure data protection law applied whenever its base rationale was fulfilled. In this regard, in the travaux préparatoires for Directive 95/46, the Commission of the European Communities explicitly recognised the need for a broad, flexible definition of personal data – and therefore of information as one of its constituent criteria: “‘Personal data’. As in Convention 108, a broad definition is adopted in order to cover all information which may be linked to an Individual.” [23]

The need for a flexible and broad approach to the interpretation of the concept of information as an applicability criterion has been reaffirmed in subsequent jurisprudence. The Court of Justice of the European Union (CJEU), for example, in the recent case of Nowak, held: “The use of the expression ‘any information’ in the definition of the concept of ‘personal data’, within art. 2(a) of Directive 95/46, reflects the aim of the EU legislature to assign a wide scope to that concept…potentially encompass[ing] all kinds of information…provided that it ‘relates’ to the data subject.” [24] In turn, the Article 29 Working Party, in their Opinion on the Concept of Personal Data, observed: “The term ‘any information’ contained in the Directive clearly signals the willingness of the legislator to design a broad concept of personal data. This wording calls for a wide interpretation.” [25]

Against this background, we now move to map the substantive boundaries of the concept of information as an applicability criterion against each of the three differentiating axes of the structured conceptual framework – outlined in section 3. We consider the substantive boundaries of the concept, along each axis, from two perspectives: first, in terms of teleology – in light of the function of the concept in relation to the basic rationale of the GDPR; and second, in terms of whether there are further refinements of the concept identifiable in jurisprudence.
5 Mapping the Concept of Information as an Applicability Criterion in Terms of the Relationship between Information and Meaning (Role 1, Axis 1)

From a teleological perspective, the concept of information as an applicability criterion can relate only to semantic information. The purpose of data protection law under the GDPR is to protect individuals in relation to concerns around the use of their information in social contexts – by bureaucracies and by economic actors. Such social concerns arise only in relation to semantic information: they concern power relations created by other actors knowing – or potentially knowing – something socially relevant about an individual. In this regard, non-semantic concepts of information are – if not prima facie excluded – largely meaningless. As Bygrave generally observes: “Information usually denotes a form of semantic content in law…. Law is primarily concerned with regulating human relations; therein, the production and exchange of meaning play a key role.” [26]

From this teleological perspective, the concept should encompass all semantic information regardless of the degree to which interpretation is still required to produce meaning to an agent. The degree of structuring of information in terms of meaning is not definitive of the existence, or degree, of risks to individuals’ rights and freedoms pertaining to information processing. Accordingly, the concept covers everything from unstructured information, which requires further interpretation to produce meaning to an agent, all the way up to clearly structured facts. This was demonstrated in the European Court of Human Rights’ (ECtHR) Marper case. In this case, the Court recognised the risks to rights and freedoms relating to the processing of unstructured genomic information – the raw genomic code – and recognised such processing as: “interfering with the right to respect for the private lives of the individuals concerned.” [27]

Jurisprudence does little to further delimit the forms of semantic information covered by the concept of information as an applicability criterion. In fact, jurisprudence has only hinted at limitations on the range of semantic information covered by the concept in one case. This case concerned whether opinions and inferences – extrapolations about individuals from other available information – qualify as the subject of data protection law. This doubt emerged after the 2014 CJEU Y.S. and M. and S. cases.
In these cases, the Advocate General – in an opinion followed by the Court – concluded: “only information relating to facts about an individual can be personal data.” [28] The suggestion that the concept only relates to facts, however, was expunged in the subsequent 2017 CJEU Nowak case, in which the CJEU explicitly clarified: “all kinds of information…also subjective, in the form of opinions and assessments [are personal data].” [29]

6 Mapping the Concept of Information as an Applicability Criterion in Terms of the Relationship between Information and Media (Role 1, Axis 2)

From a teleological perspective, the concept of information as an applicability criterion is ambivalent as to the medium of information storage or transfer. In terms of purpose, the concept aims to encompass all semantic information relating to an individual, the processing of which might constitute a risk for that individual. As the teleology of the concept relates to the semantic content of information, the media of storage and transfer are incidental. Accordingly, from this perspective, there is no limitation on the media which may be encompassed by the concept. The concept can encompass information stored and processed in computer-based information processing systems, information stored and processed in other artificial man-made systems, and even information stored in naturally occurring media – for example DNA stored in a biological sample.
[30] The ambivalence of the concept to the media of storage and transfer is generally affirmed in jurisprudence. There is only limited CJEU case law on the matter. Yet, where the issue has been discussed, the Court has always recognised the concept extends to cover the information storage and transfer media in question. In the recent CJEU cases of Ryneš and Buivids, for example, the Court highlighted the concept encompasses information stored and processed in sound and image form.
[31] More extensive consideration comes from the Article 29 Working Party, in their Opinion on the Concept of Personal Data. In the Opinion, the Working Party made the following statement: “Considering the format or the medium on which…information is contained, the concept of personal data includes information available in whatever form, be it alphabetical, numerical, graphical, photographical or acoustic, for example. It includes information kept on paper, as well as information stored in a computer memory by means of binary code, or on a videotape, for instance.” [32] Despite general affirmation in jurisprudence, however, doubt has been raised in one specific case.
This case concerns whether information stored in biological form, in DNA in human biological samples, can fall within the concept of information as an applicability criterion under the GDPR. Doubt emerges on the back of the same Article 29 Working Party Opinion discussed in the preceding paragraph. In this regard, the Working Party stated: “Human tissue samples (like a blood sample) are themselves sources out of which [information is] extracted, but they are not [information] themselves.” [33] This statement seems conclusive. Yet, the Article 29 Working Party position is not supported by clear substantive argumentation. In fact, a deeper investigation reveals strong evidence that information stored in biological form, in DNA in human biological samples, should be regarded as falling under the concept of information as an applicability criterion in the GDPR. As this issue is seldom discussed, the next section will outline the argumentation supporting the position in more detail.
7 Mapping the Concept of Information as an Applicability Criterion in Terms of the Relationship between Information and Media: The case of information in DNA

The argumentation rests on three pillars: first, the teleological legitimacy of the position – touched on above, now elaborated in detail; second, the legal-technical legitimacy of the position; and third, the jurisprudential support for the position.
In the first instance, there is a strong case for the teleological legitimacy of the position. There are many situations in which biological samples are collected, stored and processed for the genomic data they contain – for example biobanking. In these cases, storage and transfer of information in biological form – in DNA – is practically equivalent to the storage and transfer of sequenced genomic data. As a result, the processing of biological samples engages an equivalent set of rights to the processing of sequenced genomic data. As Bygrave observes, in such contexts: “it is increasingly difficult, in practice, to distinguish between data/information and their biological carriers…there is frequently an intimate link between biological samples and the information they generate.” [34] Thus, if the concept of information as an applicability criterion should be interpreted broadly to apply to all types of information whenever individuals’ rights in information are at risk, and – as discussed in section 5 in relation to the Marper case – such risks are engaged by the processing of individuals’ genomic information, then the concept should surely also apply to information stored in DNA in biological samples.
In turn, there are no clear legal-technical obstructions which can be raised against the position. Two forms of legal-technical argument against the position have been put forward. Both, however, are flawed. First, it has been argued that biological samples – and therefore the DNA contained therein – cannot technically qualify as information at all. As Nys put it: “data are representations of reality, whereas human biological materials are real themselves.” [35] Contrary to Nys’ assertion, however, the dominant characterisation of DNA in biological samples is as information. DNA is conceptualised as information in popular understanding, as well as in the genetic sciences.
Indeed, so close is the comparison that DNA has even been put forward as an alternative to computer-based information storage. [37]
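To make the comparison concrete, the base-mapping idea underlying DNA data storage research can be sketched in a few lines of code. The sketch below is purely illustrative – the two-bits-per-nucleotide mapping and the toy ‘record’ are our own assumptions, and real storage schemes add error correction and synthesis constraints:

```python
# Minimal sketch: encoding arbitrary binary data as a DNA base sequence.
# The two-bits-per-nucleotide mapping is an illustrative assumption; real
# DNA data storage schemes add error correction and avoid base repeats.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map every two bits of input onto one nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    """Recover the original bytes from a nucleotide sequence."""
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

record = "Name: J. Doe; blood group: AB+".encode()  # a hypothetical file on an individual
sequence = encode(record)
assert decode(sequence) == record  # the round trip is lossless
```

Were such a sequence chemically synthesised, the resulting molecule would carry exactly the semantic content of the original file – which makes a blanket claim that DNA cannot be information difficult to sustain.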
If DNA cannot be information, should an extensive file on an individual stored in the medium of DNA not qualify as information either? Second, an argument has been put forward that the concept of information as an applicability criterion in EU data protection law may have been built around a concept of information drawn from informatics, and that such a discipline-specific concept cannot support the inclusion of biological samples.
[38] There is, however, no evidence that such a discipline-specific concept of information was intended as a template for the concept in the GDPR, or in any prior instrument of EU data protection law – in either the legal texts themselves or in the travaux préparatoires.
[39] Even if such a discipline-specific concept had been used as a template, DNA in biological form could still be conceived of as information. In Zins’ work on the concept of information in informatics, for example, several definitions of information are identifiable which encompass DNA.
[40] The most authoritative definition of information in informatics, offered in ISO 2382-1, can also encompass DNA.
[41] Finally, the position has growing jurisprudential support. In this regard, we would highlight the existence of three decisions before the ECtHR in which DNA was explicitly recognised in terms of personal data – and therefore in terms of information. The most well-known of these cases is the Marper case – already discussed above. In this case, the Court explicitly recognised that: “cellular samples [as carriers of DNA], constitute personal data.” [42] There are, however, two other, more recent, cases, in which the Court has reiterated this position. In both Gaughran and Trajkovski and Chipovski , the Court stated: “The Court notes that…DNA material is personal data.” [43] In principle, jurisprudence from the ECtHR – as a Court capable of making binding decisions in relation to Member States – should be regarded as having greater legal significance than competing claims in an Article 29 Working Party Opinion.
[44] It may be argued that these assertions may not reflect the Court’s considered position on the definition of information as a constituent criterion of personal data and should thus be taken with caution. This argument pivots on the fact that the Court’s statements in each case were brief and not supported by substantive argumentation.
[45] There are two reasons, however, that this argument cannot be accepted. First, the statements in the latter two cases are key to the Court’s subsequent argumentation. In both cases, the statements provide the base justification for the finding of an interference with the applicants’ right to private life. It seems highly unlikely the Court would build its legal reasoning around an unconsidered position. Second, the statements concern a concept – personal data – with an extensive history in Council of Europe law and ECtHR jurisprudence.
[46] Recall the concept already appeared as an applicability criterion in the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981) – a concept on which the concept of information as an applicability criterion in the GDPR is based.
[47] It seems highly unlikely the Court failed to recognise the history or significance of the concept, or acted carelessly in relation to the concept, when making its comments.
8 Mapping the Concept of Information as an Applicability Criterion in Terms of the Relationship between Information and Human Cognition (Role 1, Axis 3)

From a teleological perspective, the concept of information as an applicability criterion engages human cognition in an indirect way. Human cognition need not play a direct role in the creation or perception of information for information to attain social significance – and thereby pose a risk in terms of individuals’ rights. For example, the sequencing and digital storage and transfer of an individual’s genome could happen automatically, without any human scrutiny or interrogation of the pertinent information. Yet, none would argue the processing of a sequenced genome poses no risk to individuals’ rights.
[48] From a teleological perspective, however, human cognition does need to play an indirect role in setting the information processing context – programming computer systems, for example – and in setting the processing agenda. It is impossible to imagine a situation in which semantic information could obtain social significance, and thus pose a risk to rights, without human cognition playing some role in setting the processing context.
The distinctions outlined in the paragraph above are supported in jurisprudence. There has been little jurisprudence specifically dealing with the relationship between human cognition and the concept of information as an applicability criterion. The matter has, however, received certain indirect consideration, in which jurisprudence has provided two significant clarifications. First, jurisprudence has generally clarified that the concept is ambivalent as to whether human cognition has played an active role in the creation or perception of information. In the CJEU Digital Rights Ireland case, for example, the Court recognised that systems which automatically store and retain information – independent of human cognition – constitute systems which process personal data. The Court thereby confirmed that information processed in such systems can fall within the concept of information as an applicability criterion.
[49] Second, jurisprudence has clarified that the concept of information as an applicability criterion is not only ambivalent as to whether human cognition has played a role in the creation and perception of information processed, but is also ambivalent as to whether human cognition has played a role in determining the semantic content of information processed. In their Opinion on Online Behavioural Advertising, for example, the Article 29 Working Party stated: “the information collected in the context of behavioural advertising [constitute personal data – and therefore information as an applicability criterion].” [50] Online behavioural advertising exemplifies a context in which artificial intelligence and machine learning processes produce novel information about individuals without human cognitive involvement.
[51] The previous four sections mapped the substantive boundaries of the concept of information as an applicability criterion (role 1) in the GDPR. With this mapping complete, we now move on to perform the same process in relation to the concept of information as an object of regulation (role 2) in the GDPR.
9 Providing an Overview of the Background of Information as an Object of Regulation (Role 2)

The concept of information as an object of regulation in the GDPR was never explicitly recognised or elaborated by the legislator in the legislative process. There are thus no primary sources to consult to provide a background to the concept. However, the concept is implicit in the substantive principles in the GDPR and is reflected in the assumptions these embody. A look at the range and modalities of these provisions thus provides the basic material from which the concept can be mapped.
[52] The GDPR consists of three types of substantive provision relating directly to the handling and manipulation of information. First: legitimate processing provisions. All personal data processing must be legitimated under one of the grounds outlined in art. 6 – in relation to regular, non-sensitive personal data – or art. 9 – in relation to sensitive personal data.
[53] In both cases, these legitimate grounds can, conceptually, be split into two groups: consent; and public interest justifications.
[54] Second: data controller obligations. In principle, in all cases of personal data processing, the data controller must adhere to a set of obligations – centrally outlined in art. 5 – including, for example: the obligation to maintain personal data accurately; and the obligation to treat data confidentially.
[55] Finally: data subject rights. In all cases of data processing, the data subject retains, in principle, certain rights over their personal data including, for example: the right to withdraw consent; and the right to access personal data.
With few exceptions – notably the right to data portability in art. 20, the data protection impact assessment obligation in art. 35 and data breach notification obligations in arts. 33 and 34 – the substantive provisions outlined in the GDPR are not novel.
[56] Most provisions were already present in some form in Directive 95/46. Most provisions present in Directive 95/46 were, in turn, inherited from provisions present in earlier EU Member States’ data protection law and/or other international data protection instruments with European relevance. Indeed, as González Fuster observes, the core data controller obligations can be traced back to the first two international instruments with European relevance which emerged in the early 1980s: the Organisation for Economic Cooperation and Development’s Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980); and the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981).
[57] Each of the substantive provisions in the GDPR was designed to be flexible. Flexibility by design is necessary as a result of the omnibus nature of the GDPR and the need for further specification of provisions to account for sectoral processing differences.
[58] Flexibility is also necessary in order for provisions to adapt to changing information processing technologies, the changing social contexts in which these technologies are used and the changing risks with which they are associated.
[59] To provide more concrete interpretations of provisions, as necessary according to context, Data Protection Authorities (DPAs) – at both the EU (the EDPB) and Member State level (national DPAs) – are provided with broad interpretative and adaptive powers.
[60] Flexibility of provisions has, however, significant limits. Applicable provisions cannot simply be disapplied. Nor can provisions be interpreted such that they disproportionately disapply other provisions; conflict with their own core purpose; conflict with the core aims of data protection law generally; or disproportionately impact other rights or legitimate interests engaged by data processing.
Against this background, we now move to map the concept of information as an object of regulation against each of the three differentiating axes – outlined in section 3. As discussed above, this mapping cannot rely on primary sources. As an alternative, in relation to each axis, we consider which assumptions about the characteristics of information to be stored and manipulated must be present for the GDPR’s substantive provisions to make sense. [62]

10 Mapping the Concept of Information as an Object of Regulation in Terms of the Relationship between Information and Meaning (Role 2, Axis 1)

In the first instance, the concept of information as an object of regulation relates only to semantic information. Substantive provisions in the GDPR aim to describe the modalities of controller action in relation to the processing of individuals’ personal data which may result in risks for those individuals. Such control mechanisms only make sense in relation to semantic information. Concepts of information in which semantic content is not central are thus irrelevant. Algorithmic information, for example – which defines the informational content of an object in terms of the bits of the smallest programme capable of calculating that object – is anathema to the concept of information as an object of regulation. [63]
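For orientation, the algorithmic notion alluded to here can be stated formally. What follows is the standard Kolmogorov complexity definition from algorithmic information theory – a general mathematical formulation, not anything drawn from the GDPR or its preparatory materials:

K_U(x) = \min\{\, |p| : U(p) = x \,\}

where U is a universal machine, p ranges over programmes and |p| denotes the length of p in bits. The definition quantifies the information in x without any reference to what x means to anyone – which is precisely why it can do no work in a regulatory scheme concerned with socially meaningful content.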
Bygrave’s observation thus remains relevant: “Law is primarily concerned with regulating human relations; therein, the production and exchange of meaning play a key role.” [64]

We can, however, identify one further boundary criterion. The concept of information as an object of regulation is limited to information of specific semantic content: highly structured information in the form of social facts. The effective function of multiple substantive provisions in the GDPR depends on the information being processed being in the form of social facts. The assumption is evident, in particular, in relation to provisions aimed at ensuring the transparency of data processing to the data subject – for example, consent provisions in arts. 6(1)(a) and 9(2)(a), information obligations in arts. 13 and 14 and access rights in art. 15.
[65] These provisions work on the basis of a one-off communication model, mandating that a range of types of information about a processing operation be provided to the data subject, such that the data subject is put in a position to understand the scope and consequences of processing.
[66] Information provided should be accurate and relevant at the moment of information provision and remain accurate and relevant over the duration of processing.
Such a one-off communication model, however, does not necessarily function in relation to unstructured information – which requires further interpretation and structuring to produce socially relevant facts about a data subject. Two logical problems emerge. First, if the socially relevant factual content of information only surfaces via interpretation – during processing – this content only surfaces after the communication of information about the processing to the data subject. How can the data subject appreciate the consequences of processing, if they cannot be informed of the socially relevant factual content of the information about them which will eventually be processed? [67] Second, the socially relevant facts which can be extracted from the analysis of unstructured information may change over time – as, for example, interpretative approaches advance. How can the data subject appreciate the consequences of processing, if the range of relevant facts which might be extracted from their information is liable to change?

We recognise a counter-argument might be put forward to the above position: many of the types of information required to be communicated under the GDPR’s transparency provisions, which are relevant to allowing data subjects to understand the scope and consequences of processing, are unrelated to the degree to which information processed is already in the form of social facts. For example, transparency provisions contain a general obligation to communicate information concerning the purposes of the processing to the data subject – see, for example, arts. 13(1)(c), 14(1)(c) and 15(1)(a). Information on the purposes of processing is indeed vital for the data subject to understand the scope and consequences of processing. It is also true that information on the purposes of processing is technically independent of the degree to which processed information requires interpretation to produce social facts.
This counter-argument is superficially persuasive. It fails, however, to recognise that the factual content of information processed has, from the perspective of the consequences and risks of processing, a significant impact on how all other relevant information about processing is understood and evaluated. Consider, for example, information concerning the purposes of processing in online behavioural advertising. The evaluation of the consequences and risks of such processing for a data subject’s life will vary depending on the factual content of the personal data being processed. Evaluation of consequence will differ, for example, depending on whether an advertiser processes information on a subject’s shoe size, or whether they also process information on a subject’s sexuality.
[68] Thus, the mere provision of information on the purposes of processing will not necessarily be sufficient to allow the data subject to understand the consequences and risks of processing.
11 Mapping the Concept of Information as an Object of Regulation in Terms of the Relationship between Information and Media (Role 2, Axis 2)

The concept of information as an object of regulation is limited to media which facilitate the easy and cost-effective reproduction and communication of information. In practice, this reduces the range of storage and transfer media that the concept encompasses to artificial man-made media designed for easy reproduction and transfer of information – for example paper and digital media. This boundary criterion is an underlying assumption behind the effective function of several substantive provisions in the GDPR. Most significant amongst these provisions are access rights elaborated in art. 15 – particularly data subject rights to obtain a copy of personal data – and data portability rights outlined in art. 20 – in relation both to the right to obtain a copy of one’s own personal data, and the right to have personal data transferred to another controller.
Arts. 15 and 20 function by permitting the data subject to easily and cheaply obtain, or have transferred, a copy of their personal data. The provisions constitute formal safeguards allowing data subjects transparency in relation to, and control over, the processing of their information, whilst not imposing prohibitive costs or absurd modalities of action on data controllers.
[69] Art. 15 requires that: “The controller shall provide a copy of the personal data undergoing processing.” Art. 20 states: “The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format.” The approach in these articles makes sense for artificial man-made data storage media designed for easy and cheap storage and transfer of information. Information stored and transferred in such media will tend to be accessible to data subjects and can thus facilitate transparency. The ease and low cost of storage and transfer then ensure that these obligations do not impose undue burdens on data controllers.
Such an approach does not, however, function when the media of storage and transfer are not artificial man-made media, but rather are naturally occurring media – for example human biological samples. Two logical problems appear. First, there is no guarantee that information stored and transferred in naturally occurring media will be readily accessible to data subjects. Thus, there is no guarantee that transfer of such media to data subjects will assist with transparency. Second, information stored and transferred in naturally occurring media is typically not amenable to cheap and easy reproduction or transfer. Thus, the imposition of reproduction and transfer obligations on data controllers is liable to impose prohibitive costs and absurd modalities of action. The reality of data controllers needing to engage in art. 15 or 20 obligations in relation to naturally occurring media may seem unlikely. In section 15, however, we will provide concrete examples.
12 Mapping the Concept of Information as an Object of Regulation in Terms of the Relationship between Information and Human Cognition (Role 2, Axis 3)

In the first instance, the concept of information as an object of regulation requires human cognitive involvement in establishing the processing context. Virtually all substantive principles in the GDPR are based on the presumption of human cognitive influence over the processing context. All core data controller obligations outlined in art. 5, for example, require that social considerations have taken place in establishing the modalities of a processing context, for which human cognition is a prerequisite. For instance, discharge of the art. 5(1)(f) obligation concerning the confidential collection and storage of information requires human cognitive involvement in at least two ways. First, human cognitive involvement is required in defining which relational boundaries should be considered as demarcating relationships of confidentiality.
[70] Second, human cognitive involvement is then required to determine the degree to which technical and organisational approaches are necessary to maintain the integrity of these boundaries.
[71] We can, however, also identify one further boundary criterion: the concept of information as an object of regulation requires human cognition to be capable of perceiving, and comprehending, the content of information being processed – even if perception never, in fact, takes place. This requirement is implicit in the effective function of a broad range of substantive provisions in the GDPR.
[72] Certain provisions require the possibility for human perception and understanding of information to ensure adequate control measures are, or have been, implemented. For example, effective discharge of the art. 5(1)(d) accuracy obligation requires the possibility for human perception and understanding of information processed to ensure information is accurate and up to date.
[73] Other provisions require the possibility for human perception and understanding of information to ensure suitable manipulation of information occurs, or has occurred. For example, art. 17 erasure requirements require individuals to implement, and confirm, information erasure. Yet other provisions require the possibility for human perception and understanding of information to ensure adequate external communication of information concerning the details and consequences of a processing operation can occur. For example, data subject transparency rights stipulated under art. 14 require a controller to communicate to the data subject “the categories of personal data [being processed].”

Despite the above observations, it should be highlighted that the concept of information as an object of regulation does not foresee the need for human cognition in relation to the creation or perception of each and every specific element of information processed. None of the substantive principles in the GDPR function on the basis that information must have been created or perceived by a human. In fact, there are provisions in the GDPR which relate solely to processing contexts in which information is created and processed with no direct human cognitive involvement at all. Arts. 21 and 22, for example, relate to instances in which automated profiling and decision making – situations including artificial intelligence and machine learning – are in play.
[74] These articles do not serve to diminish the applicability of other substantive principles in the GDPR, but simply offer supplemental protection when no human is engaged in the creation or perception of information.
The previous eight sections mapped the substantive boundaries of the concept of information as an applicability criterion and as an object of regulation. Considering the results of these mapping processes, we now move to compare the boundaries of the concepts occupying these two roles to highlight that the substantive boundaries of the concepts do not converge on the same substantive phenomenon.
13 The Two Concepts of Information Relate to Different Substantive Phenomena

A comparison of the two concepts of information reveals multiple points of difference. We distinguish two key types of difference: first, differences in degrees of flexibility; second, differences in substantive boundaries. On the back of these points of difference, we then venture a prediction as to how the gap between the concepts will develop.
In terms of the flexibility of the two concepts: via a comparison of the backgrounds of the concepts, it is evident the concept of information as an applicability criterion is considerably more flexible than the concept of information as an object of regulation. The flexibility of the concept of information as an applicability criterion is extreme.
[75] Ultimately, this concept of information is normatively defined by its role in “turning on” the system of protection offered under the GDPR. The concept thus potentially becomes relevant whenever risks to individuals’ rights in the information society are identifiable.
[76] The flexibility of the concept of information as an object of regulation, however, is limited. The flexibility of this second concept of information is tied to the flexibility of the GDPR’s substantive provisions. These are indeed imbued with a degree of flexibility. These provisions consist, however, of concepts and relationships with defined boundaries in both natural language and law. These boundaries cannot be ignored and thus serve as unavoidable restrictions on the elasticity of provisions.
In terms of the substantive boundaries of the two concepts: variation is identifiable along each of the three axes making up the structured conceptual framework differentiating concepts of information. In relation to axis 1: information as an applicability criterion encompasses all semantic information, while information as an object of regulation relates only to semantic information in the form of social facts. In relation to axis 2: information as an applicability criterion encompasses all information storage and transfer media, while information as an object of regulation relates only to media which facilitate frictionless copying and transfer of information – artificial man-made media. In relation to axis 3: information as an applicability criterion requires human involvement in setting the processing context, while information as an object of regulation additionally requires human cognitive ability to perceive and understand information.
Moving forward, as a result of differences in both flexibility and substance, we predict the substantive gap between the two concepts will become more pronounced over time. On the one hand, the scope of the concept of information as an applicability criterion will likely expand. As Purtova argues, social relationships are becoming increasingly informationally mediated and an increasing range of objects and processes are being perceived in terms of information. The range of social interactions, objects and processes capable of giving rise to threats to individuals’ rights in information thus grows accordingly.
[77] The scope of data protection law under the GDPR – and therefore the concept of information as an applicability criterion – will thus need to expand in response to this phenomenon. Data protection, after all, is specifically tasked with protecting individuals’ rights engaged by the collection and processing of their information.
On the other hand, the concept of information as an object of regulation will likely remain comparatively static. This seems likely given the inherent limitations in the flexibility of the substantive principles of data protection law, as well as the lack, to date, of legislative recognition of – or effort to update – the presumptions supporting the concept.
This section summarised the differences – in terms of both flexibility and substantive boundaries – between the two concepts of information in data protection law under the GDPR. Because of these differences, the GDPR will apply to types of information for which its substantive principles were not designed. In light of this assertion, we thus move to outline our third thesis: The substantive differences between the two concepts of information are significant for the efficacy of the GDPR as an instrument of information law.
To elaborate the thesis, we will provide examples of problems with the GDPR in relation to contemporary data processing phenomena which can be linked – at least partly – to differences between the two concepts of information. We provide one problematic example along each of the three axes comprising the structured conceptual framework differentiating concepts of information. [79]

14 The Consequences of Differences in Concepts of Information in the GDPR: Problems with differences in concepts relating to semantic meaning (axis 1)

The processing of genomic sequence information provides an example of a phenomenon in which problems emerge as a result of differences in concepts of information relating to the semantic meaning of information.
Genomic sequence information – as a form of semantic information – conforms to the salient criteria of information as an applicability criterion.
[80] Genomic sequence information does not, however, conform to the salient criteria of the concept of information as an object of regulation, as it is not in the form of social facts, but rather is in the form of unstructured information requiring further interpretation to be turned into social facts. The range of possible interpretations which may be applied to genomic sequence information at any given time – and thus the range of social facts which might be produced from genomic sequence information at any given time – depends on the state of genetic science at the time. In turn, the future development of genetic science is highly unpredictable.
[81] As Pontin has observed of genetic science’s efforts to get to grips with the function and content of the human genome: “no one will contest that the genome has turned out to be bafflingly complex.” [82]

The logical problems highlighted in section 10 – concerning the inability of the one-off communication approach of the GDPR’s transparency provisions, for example those in arts. 13, 14 and 15, to deal with unstructured information – thus become reality in relation to processing involving genomic sequence information. How, for example, should a data controller processing genomic sequence information, as obliged by art. 14, give a data subject a useful list of “categories of personal data” to be processed which will remain accurate over time? The controller could, at best, tell the data subject that their genomic sequence information will be processed and give an accompanying rundown of the types of social facts which can, at the moment of communication, be extracted from the sequence. Such a provision of information, however, would do nothing to address the fact that new types of socially relevant facts will become extractable from the sequence as genetic science advances. [83]
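The temporal dimension of this problem can be made concrete with a short sketch. The motif-to-trait mappings below are invented placeholders, not real genetic associations; the point is purely structural:

```python
# Sketch: the social facts extractable from a fixed genomic sequence grow as
# interpretative knowledge grows. All motif/trait pairs are invented
# placeholders, not real genetic associations.

sequence = "ATGCGTACGTTAGC"  # the data subject's raw sequence - it never changes

# Interpretative knowledge at the moment of the art. 14 communication.
knowledge_at_communication = {"CGTA": "placeholder trait A"}

# The same knowledge base some years later, after further research.
knowledge_years_later = {"CGTA": "placeholder trait A",
                         "TTAG": "placeholder disease risk B"}

def extractable_facts(seq: str, knowledge: dict) -> list:
    """Return the social facts derivable from seq under a given knowledge base."""
    return [trait for motif, trait in knowledge.items() if motif in seq]

print(extractable_facts(sequence, knowledge_at_communication))
# ['placeholder trait A']
print(extractable_facts(sequence, knowledge_years_later))
# ['placeholder trait A', 'placeholder disease risk B']
```

The list of “categories of personal data” the controller could truthfully communicate at the outset is simply not the list that will hold years later – even though the data themselves never change.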
We recognise a counter-argument might be put forward suggesting that the severity of this issue will, in practice, be mitigated by common knowledge concerning the possibility to interpret information to produce social facts. From this perspective, common knowledge provides an epistemic framework, generally available to data subjects, which renders the need for anything more than a one-off communication moot. This would be a strong argument should detailed common knowledge on the interpretability of the genome sequence really be prevalent among EU citizens. This, however, seems unlikely to be the case. In this regard, Lanie et al., in summarising their survey of public understanding of genetics and genomics, state: “this study provides…evidence…demonstrating that misconceptions about genetic science are not infrequent in the general public, and suggests the need for improved genetic literacy and understanding.” [84]

15 The Consequences of Differences in Concepts of Information in the GDPR: Problems with differences in concepts relating to media (axis 2)

The processing of biological samples provides an example of a phenomenon in which problems emerge as a result of differences in concepts of information relating to the medium of information storage and transfer.
Biological samples constitute an information storage and transfer medium which fulfils the salient criteria of information as an applicability criterion. As naturally occurring media – rather than artificial man-made media – however, they do not conform to the salient criteria of the concept of information as an object of regulation. The logical problems highlighted in section 11 – concerning the application of the GDPR’s data transfer provisions, for example those in arts. 15 and 20, to processing involving naturally occurring media – thus become reality in relation to the processing of biological samples. The provision of a copy of a biological sample to a data subject – for example to a genomic research subject – will not serve to allow the subject to better understand the processing being conducted. The subject is highly unlikely to have the means to easily access the information in the sample and, even if they did, this would do little to assist them in understanding the processing taking place. At the same time, the need to replicate and transfer the sample may impose large costs on a data controller.
[85] For example, a copying process, as Mason observes, would require a “disproportionate cost” in terms of producing an immortal cell-line from the sample – through a process such as a polymerase chain reaction.
A counter-argument could be put forward that discussion of this problem is based on the fallacious assumption that the handling of biological samples will fall under the scope of the GDPR. This argument is built on the fact that art. 2(1) clarifies that the GDPR only applies to personal data which is “processed wholly or partly by automated means… and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system.” On the back of art. 2(1), the argument then asserts that the handling of biological samples will not constitute processing “either wholly or partly by automatic means” or processing “which form[s] part of a filing system.” It is certainly true that these art. 2(1) limitations will exclude certain activities involving the handling and use of biological samples from falling within the scope of the GDPR – for example, the use of biological samples in transplantations.
Yet, we would suggest that the counter-argument fails to consider the breadth of the relevant definitions in the GDPR, and therefore cannot be accepted. In art. 4(2), the GDPR recognises the concept of processing to encompass: “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration.” Under such a definition, there are contexts in which activities involving the handling of biological samples will constitute processing. The biobanking context, for example, involves the methodical collection, recording and organisation of samples. Indeed, key definitions of biobanking – such as that provided by the National Health and Medical Research Council – explicitly highlight the activity as being defined by the collection and organisation of biological samples in a filing system.
[87] Once it has been established that biological samples can be processed, it is then a short step to recognise that biological samples can be automatically processed, or manually processed as part of a filing system – activities which are medium independent.
16 The Consequences of Differences in Concepts of Information in the GDPR: Problems with differences in concepts relating to human cognition (axis 3)

The processing of personal data in neural networks provides an example of a phenomenon in which problems emerge as a result of differences in concepts of information relating to human cognition.
The processing of personal data in neural networks corresponds to the salient qualities of the concept of information as an applicability criterion. Personal data in neural networks need not, however, conform to the salient qualities of information as an object of regulation, as these networks may not permit human cognitive perception or understanding of all the information they process. As Kamarinou et al. observe, whilst certain types of algorithmic processing – for example decision trees – allow human cognitive perception and understanding of information processed: “The situation may be very different in relation to neural network-type algorithms, such as deep learning algorithms…the conclusions reached by neural networks are ‘non-deductive and thus cannot be legitimated by a deductive explanation of the impact various factors at the input stage have on the ultimate outcome’.” [88]

The logical problems highlighted in section 12 – concerning the need, in most provisions of the GDPR, for human cognitive ability to perceive and understand information – thus become reality in processing involving neural networks.
In this regard: how, in any processing context involving complex and opaque neural networks, could a data controller be certain to have implemented suitable and adequate control mechanisms, to have made sure correct information manipulation has taken place, or to have ensured that external communication of information has occurred, when they cannot perceive or understand the information being processed? In relation to art. 5(1)(d) obligations that information be held accurately, for example, Goodman et al. highlight the difficulty in effectively evaluating information processed in a neural network: “what hope is there of explaining the weights learned in a multilayer neural net with a complex architecture.” [89] Equally, in relation to art. 17 erasure requirements, Fosch Villaronga et al. highlight the general difficulty in deleting data from artificial intelligence systems.
[90] This difficulty is magnified manifold when the forms and functions of information within the system are opaque to those who must perform the deletion operation.
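A minimal sketch makes the point tangible. The tiny network below is a toy stand-in for the deep architectures discussed above – its data, architecture and training loop are our own illustrative assumptions, not anything drawn from the cited authors:

```python
# Sketch: a trained network's parameters are fully inspectable as numbers, yet
# no individual weight corresponds to a human-legible fact about any data
# subject - which is what frustrates accuracy and erasure duties in practice.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                        # toy 'personal data': 200 subjects, 4 features
y = (X[:, 0] + X[:, 2] > 0).astype(float).reshape(-1, 1)

W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                                # plain batch gradient descent
    h = sigmoid(X @ W1)                              # hidden activations
    p = sigmoid(h @ W2)                              # predicted probabilities
    grad_p = (p - y) * p * (1 - p)                   # squared-error loss gradient at the output
    grad_h = grad_p @ W2.T * h * (1 - h)             # backpropagated to the hidden layer
    W2 -= 0.1 * h.T @ grad_p / len(X)
    W1 -= 0.1 * X.T @ grad_h / len(X)

print("accuracy:", float(((p > 0.5) == y).mean()))   # the network has learned something...
print(W1)                                            # ...but these 32 numbers explain nothing
```

Every parameter is available for inspection, and yet a controller examining W1 cannot say which stored information concerns which data subject, whether it is accurate, or whether it has been erased.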
We recognise that the issues raised by neural networks – as well as other complex artificial intelligence and machine learning processing – in terms of the effective function of pertinent provisions of the GDPR have already been framed, and discussed, at length. This is particularly the case in relation to discussions of algorithmic transparency. Relevant authors in this regard include, amongst many others, Binns, Brkan, Kaminski, Mendoza et al., Selbst et al. and Wachter et al.
[91] Each of these authors has considered the context, function, input and output of personal data processed in artificial intelligence systems in relation to the effective function of substantive provisions in the GDPR. We understand questions might thus be raised as to whether it makes sense to consider the problems neural networks pose to the GDPR in terms of distinctions between concepts of information: what would such an approach bring? In this regard, we do not see that a consideration of differences in concepts of information in the GDPR in relation to problems posed by neural networks poses any conceptual challenge to current discussions on algorithmic transparency. We do, however, believe the approach offers a new perspective within which to frame algorithmic transparency discussions. As highlighted by Gellert, algorithmic transparency discussions have hitherto focused overwhelmingly on the function of algorithms.
[92] In these discussions, the concept of information has been largely ignored. Further research will be necessary, however, to conclude whether considering these issues from the perspective of differences between concepts of information – as opposed to algorithmic function – will bring fresh insight to discussions.
This section showed that divergence between the two concepts of information in the GDPR leads to problems for the efficacy of the GDPR as an instrument of information law. The next section concludes the paper by looking at the legal avenues through which such problems might be addressed.
17 Legal Avenues for Addressing Problems Relating to the Divergence of Concepts of Information

There are several legal approaches via which issues relating to the disparity between the two concepts of information might be addressed within the structure of the GDPR. Two are particularly important: DPA interpretation and adaptation; and Member State derogatory legislation.
[93] Each of these approaches, however, has limitations. Eventually, legislative intervention and correction may be required.
Certain problems emerging from the divergence in the two concepts of information might be addressed via the GDPR’s internal adaptive mechanisms. The GDPR foresees the possibility for the adaptation of substantive principles through national DPA – art. 57 – and EDPB – art. 70 – guidance. Such interpretation could help align the two concepts of information to address concrete problems. For example, EDPB guidance could clarify the applicability of art. 15 to biological samples such that the article no longer imposes absurd requirements on controllers. These mechanisms, however, are limited in their capacity to provide comprehensive solutions to divergences. The mechanisms are limited to the adaptation of principles already present in the GDPR. They thus have little capacity to overrule principles in the GDPR when these make no sense in relation to certain modalities of information. Nor do they have the capacity to introduce new principles which may be necessary to provide supplemental protection in relation to modalities of information inadequately protected under current provisions.
[94] Certain problems could also be addressed via Member States making use of derogation possibilities to void principles of the GDPR. For example, art. 9(4) offers EU Member States the possibility to derogate from the GDPR in defining supplemental principles applicable to the processing of sensitive personal data while art. 89 permits Member States to derogate from certain substantive principles in the GDPR in relation to scientific research. Member State derogation could also help align the two concepts of information to address specific concrete problems. This approach, too, however, is subject to limitations and thus cannot provide a comprehensive solution. From a substantive perspective, there are limits to the principles from which EU Member States can derogate, the circumstances under which derogation is possible and the degree to which derogation is possible.
[95] In turn, from a legal-structural perspective, any EU Member State use of derogation possibilities would disrupt the harmonious applicability of the GDPR across Europe.
Ultimately, given that the GDPR’s internal mechanisms are limited in their ability to resolve problems related to discrepancies between concepts of information, a more drastic solution comes into view: legislative intervention. The legislator would be ideally placed to introduce different strands in data protection law tailored to dealing with the different modalities of information to which the GDPR must apply. Indeed, long term, as discrepancies between the two concepts of information likely increase, and problems stemming from these discrepancies become more pronounced, legislative intervention may prove the only feasible way forward. We would observe, however, that the legislator has thus far barely recognised the possibility that different modalities of information exist, or that such differences may require tailored regulatory responses. In the legislative process leading up to the GDPR, for example, the idea of different modalities of information – as opposed to different actors, sectors and technologies – was scarcely thematised at all.
[96] Thus, there are possible approaches available to address the divergence in concepts of information internal to the GDPR. However, these internal mechanisms have limits. Eventually, to provide a comprehensive solution, the legislator may need to step in and recognise the existence of, and design bespoke standards of substantive protection in relation to, different modalities of information.
18 Conclusion

The concept of information plays two distinct roles in the GDPR. First, the concept functions as one of the GDPR’s applicability criteria – as outlined in art. 4(1): information as an applicability criterion. Second, the concept refers to a substance around which the substantive provisions of the GDPR have been designed, and in relation to which the substantive provisions in the GDPR are intended to act: information as an object of regulation. Significantly – albeit somewhat counterintuitively – the substantive boundaries of the concepts of information occupying these two roles do not converge on the same substantive phenomenon.
The concept of information as an applicability criterion is highly flexible and relates to all semantic information, stored and transferred in any media and processed in any processing context established with human cognitive involvement. The boundaries of the concept of information as an object of regulation are more concrete and encompass only semantic information in the form of social facts, stored and transferred in media which facilitate easy and cost-effective replication and transfer of information – i.e. artificial man-made media – processed in a context set via human cognitive involvement and amenable to human cognitive perception and understanding.
Differences between the two concepts are not simply academic curiosities. The divergence is a causal factor in a range of concrete problems with the efficacy of the GDPR as an instrument of information law. The broader scope of information as an applicability criterion means that the GDPR will apply to types of information for which its substantive principles were not designed. For example, the concept of information as an applicability criterion extends to information stored in a naturally occurring medium – biological samples. Yet, the concept of information as an object of regulation is limited to certain types of artificial man-made media. Consequently, when the GDPR is applied to biological samples, problems ensue. These include the potential imposition of absurd obligations on data controllers – such as obligations to copy and transfer biological samples to data subjects at disproportionate cost.
Moving forward, discrepancies between the two concepts of information might be addressed in piecemeal fashion, via specific solutions targeted at specific problems. These solutions might be provided via EDPB and national DPA guidance providing interpretations of the GDPR tailored to address problems. These solutions may also be delivered via Member States using derogation powers to disapply problematic provisions in the GDPR. In order to address the issue in a more comprehensive manner, however, direct legislative attention may be necessary. Indeed, if the GDPR is to continue to play a role as a key instrument of protection of individuals’ rights in modern information societies, then explicit legislative differentiation between modalities of information may eventually be necessary.
19 Acknowledgements
Raphaël Gellert has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (INFO-LEG project, grant agreement No 716971).
[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
[2] See: Lee Bygrave, “The Body as Data? Biobank Regulation via the ‘Back Door’ of Data Protection Law” (2010) 2(1) Law, Innovation and Technology 1-25; Lee Bygrave, “Information Concepts in Law: Generic Dreams and Definitional Daylight” (2015) 35(1) Oxford Journal of Legal Studies 91-120; Dara Hallinan and Paul De Hert, “Many Have It Wrong – Samples Do Contain Personal Data: The Data Protection Regulation as a Superior Framework to Protect Donor Interests in Biobanking and Genomic Research” in Brent Mittelstadt and Luciano Floridi (eds.), The Ethics of Biomedical Big Data (Basel: Springer, 2016), pp. 119-139; Raphaël Gellert, “Data Protection and Notions of Information: A Conceptual Exploration” (2019) SSRN Working Paper, available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3284493 (accessed 28 February 2020).
[3] The aim of this paper is to sketch the contours of an important, albeit largely ignored, topic of research in data protection law under the GDPR: the concept of information. In this regard, the paper should not be taken as offering a final authoritative position on the exact functions, boundaries or significance of the concept in the GDPR, nor as offering specific accounts of the extent of problems caused by the concept for the function of the GDPR, nor as suggesting that the problems caused by the concept are more important than, or replace, problems caused by other concepts in the GDPR. Further clarification of such complicated definitional and relational issues requires, and deserves, much further research.
[4] See, for example: Bygrave, “The Body as Data?”, supra n. 2. Although Bygrave does indicate the existence of a second role, this is not made explicit as an object of definition and analysis.
[5] Article 29 Data Protection Working Party, “Opinion 4/2007 on the Concept of Personal Data” (WP136, 2007), pp. 6-9, available at https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2007/wp136_en.pdf (accessed 3 February 2020).
[6] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC.
[7] As Taylor puts it, the concept of personal data functions as a “gateway to the application of data protection principles.” Mark Taylor, Genetic Data and the Law: A Critical Perspective on Privacy Protection (Cambridge: CUP, 2012), p. 77.
[8] The most systematic elaboration of the concept of information in EU data protection law is offered in the Article 29 Working Party, “Opinion on the concept of personal data”, supra n. 5, pp. 6-9. There are issues with the approach in this Opinion, however. Three stand out. First, the Opinion only considers information in relation to the concept of personal data and thus in its role as an applicability criterion – the limitation of this scope will become clear as this paper progresses. Second, how the Article 29 Working Party drew up their schema for analysing the concept is unclear and, eventually, misses certain key aspects of the concept such as the relationship between information and human cognition. Finally, the Opinion contains contradictions and lacks clarity.
[9] We view these axes, in no way, as being exhaustive or definitive. We appreciate the possibility for the addition of further significant axes, as well as the possibility for alternative approaches to the conceptualisation of axes. We hope that other scholars may do just this. The selection of axes relies, in particular, on the helpful breakdowns of concepts of information by Zins. Chaim Zins, “Conceptual approaches for defining data, information and knowledge” (2007) 58(4) Journal of the Association for Information Science and Technology 479-493, pp. 487-489. The selection of axes also relies on prior knowledge of the background of EU data protection law, its modus operandi and consequent assumptions as to the types of axes which might be likely to provide fruitful points of reference for analysis.
[10] See: Claude Shannon and Warren Weaver, The Mathematical Theory of Communication (Urbana: University of Illinois Press, 1949). Several other such approaches are also identifiable.
[11] The reader might, at this point, wonder why we have not simply used the term data when talking about unstructured information. This is a linguistic choice to avoid confusion later in the paper. Specifically, the terms information and data are used almost interchangeably in EU data protection law. Eventually, EU data protection law is unconcerned with data as such – at least in the sense the term is used in other contexts – but rather only with the potential information which may be contained within data. In order to avoid terminological confusion, we thus only talk about different degrees of structure in information.
[12] Supra n. 9, p. 488.
[13] Neil Manson, “The Medium and the Message: Tissue Samples, Genetic Information and Data Protection Legislation” in Heather Widdows and Caroline Mullen (eds.), The Governance of Genetic Information: Who Decides? (Cambridge: CUP, 2009) pp. 15-36, p. 20.
[14] Luciano Floridi, The Philosophy of Information (Oxford: OUP, 2011), p. 43.
[15] International Organization for Standardization, ISO/IEC 2382:2015 Information technology — Vocabulary (ISO/IEC 2382:2015, 2015), available at https://www.iso.org/obp/ui/#iso:std:iso-iec:2382:ed-1:v1:en (accessed 3 March 2020).
[16] See, for example, Paul Griffiths and Karola Stotz, Genetics and Philosophy: An Introduction (Cambridge: CUP, 2013), pp. 153-158.
[17] Organisation for Economic Co-operation and Development Guidelines on the Protection of Privacy and Transborder Flows of Personal Data [1980], art. 1(b); Organisation for Economic Co-operation and Development Guidelines on the Protection of Privacy and Transborder Flows of Personal Data [2013] art. 1(b); Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data [1981], art. 2(a); Council of Europe Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data [2018].
[18] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, art 2(a). Even though some earlier national statutes relied upon the much narrower definition of “biographical information” – see section 9.
[19] Supra n. 5, pp. 6-9.
[20] Supra n. 5, p. 5.
[21] Peter Nowak v Data Protection Commissioner, C‑434/16, [2017] ECLI:EU:C:2017:994, paras 33-35 (hereinafter Nowak).
[22] See, for example, Worku Gedefa Urgessa, “The Protective Capacity of the Criterion of ‘Identifiability’ under EU Data Protection Law” (2016) 2(4) European Data Protection Law Review 521-531, p. 521.
[23] Commission of the European Communities, Commission Communication on the protection of individuals in relation to the processing of personal data in the Community and information security (COM(90) 314 final, 1990), p. 19, available at http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:51990DC0314&from=EN (accessed 4 March 2020).
[24] Nowak, supra n. 21, para. 34.
[25] Supra n. 5, p. 6.
[26] Bygrave, “Information Concepts in Law” supra n. 2, p. 112. See also: Raphaël Gellert, “Organising the regulation of algorithms: comparative legal lessons” (2019) Presentation given at the TILTing 2019 Conference.
[27] S. and Marper v United Kingdom, app. nos. 30562/04 and 30566/04, [2008], para. 73 (hereinafter Marper). We recognise that the case did not explicitly use the term “unstructured genomic information.” The case did, however, deal with cellular samples which the Court recognised as being of significance in relation to the individual’s private life as these contain the raw genomic code – in DNA. This is raw form genomic information which requires further analysis through an interpretative framework – provided by genetic science – to extract information with social significance about an individual. Hence, the raw genomic code might be referred to as unstructured genomic information. For example, in order to extract information from an individual’s genome as to whether that individual has a genetic predisposition to contract Huntington’s disease, an interpretative framework based around the detection of a mutation in the HTT gene would need to be applied. See: David Craufurd et al., “Diagnostic genetic testing for Huntington’s disease” (2015) 15(1) Practical Neurology 80-84, p. 80. Indeed, the Court specifically recognised the significance of the raw genomic code for individuals’ private life due to the possibility to subject the code to different types of interpretative framework to produce different types of socially significant factual information about those individuals – and indeed their relatives. The Court stated, for example: “In addition to the highly personal nature of cellular samples, the Court notes that they contain much sensitive information about an individual, including information about his or her health. Moreover, samples contain a unique genetic code of great relevance to both the individual and his relatives. In this respect the Court concurs with the opinion expressed by Baroness Hale in the House of Lords (see paragraph 25 above)” – the opinion with which the Court was agreeing was the following: “the retention of both fingerprint and DNA data constituted an interference by the State in a person’s right to respect for his private life and thus required justification under the Convention. In her opinion, this was an aspect of what had been called informational privacy and there could be little, if anything, more private to the individual than the knowledge of his genetic make-up.” Paras 71 and 25.
[28] Opinion of Advocate General Sharpston in YS v Minister voor Immigratie, Integratie en Asiel and Minister voor Immigratie, Integratie en Asiel v M and S, Joined Cases C‑141/12 and C‑372/12, [2013], para. 56. See also Sandra Wachter and Brent Mittelstadt, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Inferences and Big Data” (2019) 2019(2) Columbia Business Law Review 494-620, pp. 521-531.
[29] Nowak, supra n. 21, para. 53. The approach in Nowak is welcome for consistency and doctrinal integrity. In terms of consistency, it would be hard to reconcile the possibility for uninterpreted datasets, as well as facts, to fall within the scope of information as an applicability criterion whilst opinions and inferences could not. In terms of doctrinal integrity, the goal of EU data protection law is doubtless relevant in relation to opinions and inferences. See also Dara Hallinan and Frederik Zuiderveen Borgesius, “Opinions can be incorrect (in our opinion)! On data protection law’s accuracy principle” (2020) International Data Privacy Law (forthcoming).
[30] See Dara Hallinan, Feeding Biobanks with Genetic Data: What role can the General Data Protection Regulation play in the protection of genetic privacy in research biobanking in the European Union? (Brussels: Vrije Universiteit Brussel, 2018), p. 99.
[31] In the case of Ryneš, the Court stated: “It should be noted that, under Article 3(1) of Directive 95/46, the directive is to apply to ‘the processing of personal data wholly or partly by automatic means, and to the processing otherwise than by automatic means of personal data which form part of a filing system or are intended to form part of a filing system’…Accordingly, the image of a person recorded by a camera constitutes personal data within the meaning of art. 2(a) of Directive 95/46 inasmuch as it makes it possible to identify the person concerned.” František Ryneš v Úřad pro ochranu osobních údajů, Case C‑212/13, [2014] ECLI:EU:C:2014:2428, paras 20-22. In the case of Buivids, the Court stated: “In the present case, it is apparent from the order for reference that it is possible to see and hear the police officers in the video in question, with the result that it must be held that those recorded images of persons constitute personal data within the meaning of art. 2(a) of Directive 95/46.” Sergejs Buivids, Case C‑345/17, [2019] ECLI:EU:C:2019:122, para. 25.
[32] Supra n. 5, p. 7.
[33] Supra n. 5, p. 9.
[34] Bygrave, “The Body as Data?”, supra n. 2, p. 20.
[35] Herman Nys, “Report on the Implementation of Directive 95/46/EC in Belgian Law” in Deryck Beyleveld, David Townend, Ségolène Rouillé-Mirza and Jessica Wright (eds.), Implementation of the Data Protection Directive in Relation to Medical Research in Europe (Aldershot: Ashgate, 2004) pp. 29-41, p. 41.
[36] See the discussion as to how biological samples, and the DNA they contain are conceptualised in popular metaphors and in genetic science in Hallinan and De Hert, “Many Have It Wrong”, supra n. 2, pp. 131-133.
[37] See, for example, George Church, Yuan Gao, and Sriram Kosuri, “Next Generation Digital Information Storage in DNA” (2012) 337(6102) Science 1628.
[38] See, for example, the recognition of these arguments in Bygrave, “The Body as Data?”, supra n. 2, pp. 14-16.
[39] For a more extensive discussion as to the lack of proof of any intention to use a concept of information in informatics as the template for the concept of information in the GDPR, see Hallinan and De Hert, “Many Have It Wrong”, supra n. 2, pp. 133-134.
[40] Supra n. 9, pp. 485-486.
[41] The International Organization for Standardization define data as: “A reinterpretable representation of information in a formalized manner suitable for communication, interpretation, or processing…Data can be processed by humans or by automatic means.” Supra n. 15. For an extensive discussion of the way in which DNA falls within the definition of information in ISO 2382-1, see Hallinan and De Hert, “Many Have It Wrong”, supra n. 2, p. 134.
[42] Marper, supra n. 27, para. 68.
[43] Gaughran v United Kingdom, app. no. 45245/15, [2020], para. 63; Trajkovski and Chipovski v North Macedonia, app. nos. 53205/13 and 63320/13, [2020], para. 43.
[44] It should be noted that, in the case, the Court did highlight that cellular samples may not always constitute personal data. The Court recognised that biological samples would only constitute personal data if they were able to fulfil all the criteria of the concept of personal data outlined in the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (1981): “The Court notes at the outset that all three categories of the personal information retained by the authorities in the present case…DNA profiles and cellular samples, constitute personal data within the meaning of the Data Protection Convention as they relate to identified or identifiable individuals. The Government accepted that all three categories are “personal data” within the meaning of the Data Protection Act 1998 in the hands of those who are able to identify the individual.” Marper, supra n. 27, para. 68. With these observations, the Court suggested that biological samples need not always be identifiable and that, accordingly, they need not always be personal data. We would highlight, however, that this recognition does not alter the fact that, by suggesting cellular samples can, in some cases, be personal data, the Court recognised that cellular samples will always – provided they contain DNA – constitute information. As discussed above, in section 4, if a substance can, in principle, fulfil the information criterion of personal data, but then cannot fulfil the other criteria of personal data – for example if a substance is not identifiable in a specific case – this does not have the effect of altering its classification as information. This only has the effect of altering the substance’s context specific classification as personal data by virtue of its failure to fulfil other criteria.
[45] In relation to the statements in the Marper case, for example, Bygrave sums up the sentiments behind this argument as follows: “The finding by the ECtHR that samples constitute personal data is…remarkable for its brevity…in formulation and…reasoning.” Bygrave, “The Body as Data?”, supra n. 2, p. 8.
[46] See, for example, in this overview the extensive history of cases concerning personal data before the ECtHR: European Court of Human Rights, Personal data protection (2020), available at https://www.echr.coe.int/Documents/FS_Data_ENG.pdf (accessed 24 February 2020).
[47] Art. 2(a) of the Convention reads: “’personal data’ means any information relating to an identified or identifiable individual (‘data subject’).” Compare this with the definition provided in art. 4(1) of the GDPR: “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’).”
[48] See for example Marper, supra n. 27, as discussed in sections 6 and 7.
[49] In clarifying that the Data Retention Directive concerned the retention of personal data, the Court stated: “By requiring the retention of the data listed in art. 5(1) of Directive 2006/24 and by allowing the competent national authorities to access those data, Directive 2006/24, as the Advocate General has pointed out, in particular, in paragraphs 39 and 40 of his Opinion, derogates from the system of protection of the right to privacy established by Directives 95/46 and 2002/58 with regard to the processing of personal data in the electronic communications sector.” Digital Rights Ireland Ltd v Minister for Communications and others, and Kärntner Landesregierung, Joined Cases C‑293/12 and C‑594/12, [2014] ECLI:EU:C:2014:238, para. 32 (hereinafter Digital Rights Ireland).
[50] Article 29 Working Party, “Opinion 2/2010 on Online Behavioural Advertising” (WP 171, 2010), p. 9.
[51] Contrary to Bygrave’s interpretation of the ISO definition of information, which seems to put the focus exclusively on human cognition following the processing of data – as the carrier of information – cognition or knowledge representation are also an integral element of machine learning, so that computer agents can conduct automated reasoning. Bygrave, “Information Concepts in Law” supra n. 2, p. 91; Tim Berners-Lee, James Hendler and Ora Lassila, “The Semantic Web: A New Form of Web Content That Is Meaningful to Computers Will Unleash a Revolution of New Possibilities” Scientific American (New York, May 2001); Qihui Wu et al., “Cognitive Internet of Things: A New Paradigm Beyond Connection” (2014) 1(2) IEEE Internet of Things Journal 129-143. One can, therefore, point to some confusion around “the meaning of meaning” and of cognition. When a human sees the result of a data processing on the screen of the device, this will constitute information provided that the cognition process at the human level is successful. However, and regardless of that, cognition will have taken place at the level of the very processing itself. It is therefore important to distinguish between human and computer cognition, which do not overlap. See, for example: Frederik Zuiderveen Borgesius, “Personal data processing for behavioural targeting: which legal basis?” (2015) 5(3) International Data Privacy Law 163-176, p. 165. Indeed, from a more historical perspective, one could even consider the expanding scope of the notion of personal data from the perspective of human cognition. Earlier definitions of personal data, such as that adopted in the original French Data Protection Act, solely referred to “biographical information” (from the original French: “information nominative”). At this stage, the overlap between machine and human cognition was arguably total. However, with advances in computing, one can argue that the definition of personal data retained in the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, and subsequently Directive 95/46, made room for a non-human cognition aspect. See Jessica Eynard, Les Données Personnelles: Quelle Définition Pour Un Régime de Protection Efficace? (Paris: Michalon, 2013), p. 11.
[52] This role for the concept of information logically relates to the first role for the concept of information – information as an applicability criterion. The fact the GDPR only applies to substances which qualify as information provides the rationale for legislative consideration of information as an object of regulation. Accordingly, it should be noted that the concept of information as an object of regulation will not have been the only criterion the legislator will have had in mind when designing substantive provisions. Other art. 2(1) and 4(1) criteria will also have played a role. None of the GDPR’s provisions on consent – for example art. 4(11) – elaborate what should happen in the case of a data subject’s death. The reason is that the art. 4(1) applicability criterion of natural person excludes the deceased and thus the need to design provisions dealing with data protection and the deceased. See, for a discussion of the boundaries of the concept of natural person as well as the protection of post-mortem privacy under EU data protection law: Edina Harbinja, “Does EU data Protection Regime Protect Post-Mortem Privacy and what could be the Potential Alternatives?” (2013) 10(1) SCRIPTed 19-38, p. 27.
[53] That personal data processing must always have a legitimation under art. 6 or art. 9 has been repeatedly confirmed in CJEU jurisprudence. See, for example: Google Spain SL, Google Inc. v Agencia Española de Protección de Datos and Mario Costeja González, Case C-131/12, [2014] ECLI:EU:C:2014:317, para. 71 (hereinafter Google Spain). This case also references a long history of CJEU case law confirming the point. See, for example: Worten – Equipamentos para o Lar SA v Autoridade para as Condições de Trabalho (ACT), Case C‑342/12, [2013] ECLI:EU:C:2013:355, para. 33 (hereinafter Worten).
[54] See for a discussion of the two types of legitimation ground: Omer Tene and Christopher Wolf, The Draft EU General Data Protection Regulation: Costs and Paradoxes of Explicit Consent (Future of Privacy Forum White Paper, 2013), p. 2, available at http://www.scribd.com/doc/121642539/The-Draft-EU-General-Data-Protection-RegulationCosts-and-Paradoxes-of-Explicit-Consent (accessed 03 May 2019).
[55] That personal data processing must always adhere to the data protection principles outlined in art. 5 has also been repeatedly confirmed in CJEU jurisprudence. See, for example, Google Spain, supra n. 53, para. 71. This case also references a long history of CJEU case law confirming the point. See, for example, Worten, supra n. 53, para. 33.
[56] For a general discussion of the novelty of substantive provisions, see Christopher Kuner, “The European Commission’s Proposed Data Protection Regulation: A Copernican Revolution in European Data Protection Law” (2012) Bloomberg BNA Privacy and Security Law Report 1-15; Paul De Hert and Vagelis Papkonstantinou, “The new General Data Protection Regulation: Still a sound system for the protection of individuals?” (2016) 32(2) Computer Law and Security Review 179-194.
[57] Gloria González Fuster, The Emergence of Personal Data Protection as a Fundamental Right of the EU (Heidelberg: Springer, 2014), p. 84.
[58] See, for example, a discussion of EU data protection law as omnibus legislation in relation to the medical research context: Roberto Lattanzi, “Data Protection Principles and Research in the Biobanks Age” in Deborah Mascalzoni (ed.), Ethics, Law and Governance of Biobanking (Dordrecht: Springer, 2015), pp. 79-93, p. 85.
[59] See for a general discussion of the rationale and necessity of the flexibility of data protection principles in relation to changing technological and social consequences and changing risks to individuals’ rights: Paul De Hert, “The Future of Privacy. Addressing Singularities to Identify Bright-Line Rules That Speak to Us” (2016) 2(4) European Data Protection Law Review 461-466.
[60] These powers even extend to the interpretation and adaptation of provisions considering novel technological and social challenges. For example: art. 70(1)(e) grants the EDPB the power to “[examine]…any question covering the application of this Regulation”; art. 58(3)(b) grants national DPAs broad discretionary powers to: “issue…opinions…on any issue related to the protection of personal data.”
[61] Supra n. 30, pp. 403-405.
[62] We recognise that the methodology we use in this mapping process is somewhat unusual – particularly in a legal paper. However, we believe the methodology is both justified and unavoidable. We also recognise that, by our logic, an argument could be made for looking to map concepts of information along the three axes in relation to each different substantive principle. This possibility is a subject which should be followed up in further research.
[63] See, for example, Gregory Chaitin, Algorithmic Information Theory (Cambridge: CUP, 1987).
[64] Bygrave, “Information Concepts in Law” supra n. 2, p. 112.
[65] Indeed, these provisions have been highlighted by certain authors as constituting the core of the protection outlined by European data protection law. Deryck Beyleveld, “An Overview of Directive 95/46/EC in Relation to Medical Research” in Deryck Beyleveld, David Townend, Ségolène Rouillé-Mirza and Jessica Wright (eds.), The Data Protection Directive and Medical Research Across Europe (Aldershot: Ashgate, 2004), pp. 5-23, p. 11.
[66] As the Article 29 Working Party stated: “A central consideration of the principle of transparency outlined in these provisions is that the data subject should be able to determine in advance what the scope and consequences of the processing entails and that they should not be taken by surprise at a later point about the ways in which their personal data has been used.” Article 29 Working Party, Guidelines on transparency under Regulation 2016/679 (WP260 rev.01, 2017 (revised 2018)), p. 7. In this regard, we would argue the information provided should allow the data subject to understand the scope and consequences of processing from a range of perspectives, including: (i) how the processing will impact the data subject’s life – which types of actors are likely to make which significant judgments, in which contexts and with which likely outcomes for the data subject; (ii) the potential risks associated with the processing; and (iii) the range of options the subject has to actively influencing the processing – which rights the subject has in relation to processing and how these might be used.
[67] As Albers observes: “data are not meaningful per se, but rather as ‘potential information’.” Marion Albers, “Realizing the Complexity of Data Protection” in Serge Gutwirth, Ronald Leenes and Paul De Hert (eds.), Reloading Data Protection (Dordrecht: Springer, 2014), pp. 213-235, p. 222.
[68] See, for example, for a discussion of the specific consequences and risks to data subjects in the processing of sensitive types of personal data – including, according to art. 9(1), personal data concerning “sex life or sexual orientation” – in the context of online behavioural advertising: Information Commissioner’s Office, Update Report into adtech and real time bidding (Report, 2019), p. 16, available at https://ico.org.uk/media/about-the-ico/documents/2615156/adtech-real-time-bidding-report-201906.pdf (accessed 6 March 2020).
[69] This is indicated by the legislator’s express efforts to preclude the need for extreme resource deployment in the discharge of these rights. In relation to access rights: art. 15(3) recognises the right of the data controller to avoid such expense in levying charges on the data subject for the provision of any more than one copy of their personal data: “For any further copies requested by the data subject, the controller may charge a reasonable fee based on administrative costs.” In relation to portability rights: Recital 68 relieves data controllers from the need to adopt special systems to ensure common formats across data controller systems: “The data subject’s right to transmit or receive personal data concerning him or her should not create an obligation for the controllers to adopt or maintain processing systems which are technically compatible.” See also Article 29 Working Party, Guidelines on the right to data portability (WP 242, 2016), pp. 13-14.
[70] See, for example, the social calculations involved in confidentiality requirements in UK medical law: Nick Nicholas, “Risk management: Confidentiality, disclosure and access to medical records” (2007) 9 The Obstetrician and Gynaecologist 257-263, p. 258.
[71] Frederik Zuiderveen Borgesius and Dara Hallinan, “Article 5” in Franziska Boehm and Mark Cole (eds.), GDPR Commentary (Cheltenham: Elgar, Forthcoming 2020).
[72] Indeed, the only provisions for which this is not true are those related to activities to be carried out prior to processing – for example provisions relating to the obligation to conduct a data protection impact assessment in art. 35.
[73] Jiahong Chen, “The Dangers of Accuracy: Exploring the Other Side of the Data Quality Principle” (2018) 4(1) European Data Protection Law Review 36-52, pp. 37-38.
[74] See Antoni Roig, “Safeguards for the right not to be subject to a decision based solely on automated processing (art. 22 GDPR)” (2017) 8(3) European Journal of Law and Technology , p. 2.
[75] This recognition has a further consequence which deserves more extensive discussion elsewhere: this concept of information may map to phenomena not corresponding to traditional understandings of information at all. Certain effects with which EU data protection law is concerned relate to presumptions of information processing. One example would be chilling effects. The CJEU observed the relevance of chilling effects in relation to information processing in the Digital Rights Ireland case. Digital Rights Ireland, supra n. 49, para. 28. There may be cases in which no information processing actually occurs and yet chilling effects risks related to information processing are still relevant. For example, dummy camera systems: no information processing occurs, but the systems engage chilling effects risks concerned with the presumptions of information processing. In this case, the phenomenon constituent of the relationship between the data subject and the data controller is not informational, but doxastic. It has thus been suggested that the concept of information, via teleological interpretation, could also extend to doxastic relationships. See, for a discussion of this possibility, Dara Hallinan, “Data Protection without Data: Could Data Protection Law Apply without Personal Data Being Processed?” (2019) 5(1) European Data Protection Law Review 293-299.
[76] Recall the CJEU observation in the Nowak case – see section 5: “The use of the expression ‘any information’ in the definition of the concept of ‘personal data’, within Article 2(a) of Directive 95/46, reflects the aim of the EU legislature to assign a wide scope to that concept…potentially encompasses all kinds of information…provided that it ‘relates’ to the data subject.” Nowak, supra n. 21, para. 34.
[77] See, for example, Nadezhda Purtova, “The law of everything. Broad concept of personal data and future of EU data protection law” (2018) 10(1) Law, Innovation and Technology 40-81, p. 43.
[78] This is not to say there are no other areas of law which play a role in the protection of individuals’ rights in information – the rights to privacy and to freedom of information and duties of confidentiality, for example. Our thanks to anonymous reviewer 2 for pointing this out.
[79] We would like to highlight that we are in no way suggesting that issues with the lack of consideration, or clarity in the substantive definition, of the concept of information are definitive for the range of possible problems with the GDPR. Nor do we wish to suggest that problems which may be framed in terms of differences in information cannot also be framed – perhaps much more fruitfully – in other ways. Rather, we aim to highlight that the different conceptualisations of information active in the GDPR play some role in the efficacy of the GDPR as an instrument of law. With this observation, we hope to spark further research into the conceptualisation, and the significance of the conceptualisation, of information in EU data protection law.
[80] This assertion has been repeatedly confirmed in jurisprudence. The Article 29 Working Party for example, have stated: “genetic data are doubtlessly ‘personal data’.” Article 29 Working Party, Working Document on the processing of personal data relating to health in electronic health records (EHR) (WP 131, 2007), p. 7.
[81] It seems all but inevitable that further scientific advance will follow, leading to the ability to extract yet more facts about individuals from the genome sequence. For a discussion see Chris Tyler-Smith et al., “Where Next for Genetics and Genomics?” (2015) 13(7) PLOS Biology.
[82] Jason Pontin, “A Decade of Genomics: On the 10th anniversary of the Human Genome Project, we ask: where are the therapies?” (MIT Technology Review, 21 December 2010), available at https://www.technologyreview.com/s/422130/a-decade-of-genomics/ (accessed 10 March 2020).
[83] Indeed, in this vein, there have already been discussions of the inadequacy of one-off communications models in relation to informed consent in the processing of genomic sequence information in genomic research. See Christine Grady et al., “Broad Consent For Research With Biological Samples: Workshop Conclusions” (2015) 15(9) American Journal of Bioethics 34-42, p. 43.
[84] Angela Lanie et al., “Exploring the Public Understanding of Basic Genetic Concepts” (2004) 13(4) Journal of Genetic Counselling 305-320, p. 318.
[85] Supra n. 30, pp. 380-385.
[86] “The data subject could be given a sample of ‘relevant’ genetic material amplified by polymerase chain reaction (though at disproportionate cost!).” Neil Manson, “The medium and the message: tissue samples, genetic information and data protection legislation” in Heather Widdows and Caroline Mullen (eds.), The Governance of Genetic Information: Who Decides? (Cambridge: CUP, 2009) pp. 15-36, p. 29. A transfer process may also require specially designed transport facilities to effectively move the biological sample – such as refrigerated vehicles. Kunkel et al., for example, observe that transport of certain biological samples would require the “maintenance of ultra-low conditions at all stages during transport…obtained with high-quality packaging and dry ice or liquid nitrogen in quantities sufficient to last during unforeseen delivery delays.” Eric Kunkel, Rolf Ehrhardt, “Frozen Assets – An Expert Guide to Biobanking” (Select Science, 23 December 2014), available at http://www.selectscience.net/editorial-articles/frozen-assets–an-expert-guideto-biobanking/?artID=35743 (accessed 9 March 2019).
[87] See, for example, the definition for biobanking provided in: National Health and Medical Research Council, Biobanks Information Paper (E110, 2010), p. 7, available at https://www.nhmrc.gov.au/about-us/publications/biobanks-information-paper (accessed 20 February 2020).
[88] Dimitra Kamarinou, Christopher Millard, and Jatinder Singh, “Machine Learning with Personal Data” (Queen Mary University of London, School of Law Legal Studies Research Paper 247/2016, 2016), p. 19. The authors cite David Warner Jr., “A Neural Network-based Law Machine: The Problem of Legitimacy” (1993) 2(2) Law, Computers & Artificial Intelligence 135-147, p. 138.
[89] Bryce Goodman and Seth Flaxman, “EU regulations on algorithmic decision-making and a ‘right to explanation’”, (2016) ICML Workshop on Human Interpretability in Machine Learning, p. 29, available at http://metromemetics.net/wp-content/uploads/2016/07/1606.08813v1.pdf (accessed 10 March 2020). The authors are, in this paper, discussing the right to an explanation under the GDPR in relation to artificial intelligence. The observation, however, is also relevant in this context.
[90] Eduard Fosch Villaronga, Peter Kieseberg, Tiffany Li, “Humans forget, machines remember: Artificial intelligence and the Right to Be Forgotten” (2018) 34 Computer Law and Security Review 304-313, pp. 308-309.
[91] Reuben Binns, “Algorithmic Accountability and Public Reason” (2018) 31 Philosophy and Technology 543-556; Maja Brkan, “Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond” (2019) 27(2) International Journal of Law and Information Technology 91-121; Margot Kaminski, “The Right To Explanation, Explained” (2019) 34 Berkeley Technology Law Journal 189-218; Isak Mendoza and Lee Bygrave, “The Right Not to Be Subject to Automated Decisions Based on Profiling” in Tatiana-Eleni Synodinou, Philippe Jougleux, Christiana Markou, and Thalia Prastitou (eds.), EU Internet Law: Regulation and Enforcement (Cham: Springer, 2017), pp. 77-98; Andrew Selbst and Julia Powles, “Meaningful information and the right to explanation” (2017) 7(4) International Data Privacy Law 233–242; Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation” (2017) 7(2) International Data Privacy Law 76-99.
[92] As Gellert also argues, data protection law currently regulates machine learning by bypassing the crucial aspect of learning and the informational concepts this presupposes. Gellert, “Data Protection and Notions of Information”, supra n. 2, p. 20.
[93] We also recognise that Courts – at national and EU level – can also function as important mechanisms for the resolution of issues created by divergences between concepts of information in the GDPR. We refrain from discussing their role in this regard, however, owing to the fact that the likelihood of specific cases landing before national or EU courts dealing with these specific issues is hard to predict. As a result, it is hard to assert that Courts will be in the position to regularly act as a mechanism for the resolution of issues. For example, only very few cases dealing with issues concerning biological samples as information have ever come before courts in the EU.
[94] There is no discussion of the ability to add to, or to exclude, the applicability of substantive provisions of EU data protection law in the outline of the powers held by DPAs or by the EDPB in art. 57 and art. 70, respectively, of the GDPR.
[95] See, for example, the possibilities, and discussion of uncertainties, in relation to Member State derogations under art. 89: Stephan Pötters, “Artikel 89” in Peter Gola (ed.), DS-GVO Datenschutz-Grundverordnung VO (EU) 2016/679 Kommentar (2nd ed.) (Munich: Beck 2017), pp. 990-999.
[96] See, for example, the document initiating the reform process leading to the GDPR and its lack of reference to the various possible modalities of information as an issue to be addressed: European Commission, A comprehensive approach on personal data protection in the European Union (COM(2010) 609 final, 2010), pp. 2-5, available at https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2010:0609:FIN:EN:PDF (accessed 6 March 2020).
Volume 17, Issue 2, August 2020
Processing Data to Protect Data: Resolving the Breach Detection Paradox
Andrew Cormack*
© 2020 Andrew Cormack. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Most privacy laws contain two obligations: that processing of personal data must be minimised, and that security breaches must be detected and mitigated as quickly as possible. These two requirements appear to conflict, since detecting breaches requires additional processing of logfiles and other personal data to determine what went wrong. Fortunately Europe’s General Data Protection Regulation (GDPR) – considered the strictest such law – recognises this paradox and suggests how both requirements can be satisfied. This paper assesses security breach detection in the light of the principles of purpose limitation and necessity, finding that properly-conducted breach detection should satisfy both principles. Indeed, the same safeguards that are required by data protection law are essential in practice for breach detection to achieve its purpose. The increasing use of automated breach detection is then examined, finding opportunities to further strengthen these safeguards as well as those that might be required by the GDPR provisions on profiling and automated decision-making. Finally, we consider how processing for breach detection relates to the context of providing and using on-line services, concluding that, far from being paradoxical, it should be expected and welcomed by regulators and all those whose data may be stored in networked computers.
Keywords
Data protection; breach detection; incident response
Cite as: Andrew Cormack, "Processing Data to Protect Data: Resolving the Breach Detection Paradox" (2020) 17:2 SCRIPTed 197, https://script-ed.org/?p=3883. DOI: 10.2966/scrip.170220.197
* Chief Regulatory Adviser, Jisc, Didcot, UK, Andrew.Cormack@jisc.ac.uk
1 Introduction: the need for breach detection
One of the core principles of data protection – whether expressed in the European General Data Protection Regulation (GDPR),[1] the Council of Europe Convention 108,[2] or the FTC Fair Information Practice Principles[3] – is that the processing of personal data should be minimised. However, all of those documents also demand that personal data must be protected, including more or less explicit requirements to be able to detect, investigate and mitigate the impact of security breaches. The benefits of a data controller reducing harm in its own processing will quickly be lost if a malicious intruder can gain undetected access to the data and cause havoc.
Even for information on paper, detection and investigation requires the collection and retention of additional personal data, such as records of who was authorised to access files and who (including unauthorised persons) actually did. For information stored in digital form on networked computers, the corresponding records of accesses and attempted accesses may involve very large collections of data. Without such logs it will be much harder to detect breaches and impossible to analyse and contain their impact. This creates a paradox: that protecting personal data against security breaches requires data controllers to collect and process more, not less, personal data. This paper demonstrates not only that these breach detection activities can be done fully in accordance with the strict requirements of the GDPR but that they should be seen as both necessary and reassuring by data subjects, data controllers, and regulators. Organisations that do not process data to detect and mitigate breaches should be a much greater concern than those that do.
The growing importance of protecting digital information and the systems that contain it was stressed at the 2017 launch of the EU Cybersecurity Act: “With recent ransomware attacks, a dramatic rise in cyber-criminal activity, the increasing use of cyber tools by state actors to meet their geopolitical goals and the diversification of cybersecurity incidents, the EU needs to build a stronger resilience to cyber-attacks.”[4]
Resilience has two main components: reducing the number of attacks that succeed (prevention) and reducing the impact of those that do (detection and recovery): thus Recital 25 of the Act seeks to help Member States and Union institutions “to prevent, detect and respond to cyber threats and incidents”.[5]
Likewise, while Article 32 of the General Data Protection Regulation (GDPR) requires that anyone processing personal data must take “appropriate technical and organisational measures” to prevent security breaches, the parallel requirement in Article 33 to notify breaches recognises that prevention alone is not enough. Recital 85 is explicit that to avoid “physical, material or non-material damage” to individuals, organisations must also be able to respond to breaches “in an appropriate and timely manner” when they occur.[6]
The Article 29 Working Party’s guidance on Breach Notification, endorsed by the European Data Protection Board,[7] confirms that “the ability to detect, address, and report a breach in a timely manner” is an “essential element” of the Article 32 duty.[8]
This dual requirement is now a common pattern in European legislation: sectors where breaches may cause disruption to society, rather than directly affecting personal data, are also required to have detection and response measures alongside their preventive ones, for example by Chapters IV and V of the Network and Information Security Directive covering energy, transport, banking, financial markets, health, water and digital infrastructures;[9] Article 19 of the eIDAS Regulation[10] covering electronic identification and trust services; and Article 4(3) of the amended ePrivacy Directive[11] covering electronic communications. Detection and response are as important as prevention.
The Article 29 Working Party outlines what is involved in detecting and analysing breaches: “For example, for finding some irregularities in data processing the controller or processor may use certain technical measures such as data flow and log analysers, from which [it] is possible to define events and alerts by correlating any log data”.[12]
The mention of irregularities and correlations indicates a need to consider both historical and contextual information: breaches will often be detected as a divergence from normal behaviour or as a group of events happening around the same time. Such activities therefore involve additional processing beyond that required to service individual transactions: for example in Breyer v Germany the European Court of Justice recognised that “aiming to ensure the general operability of those [web] services” might require retaining and using logs after the completion of the transactions to which they referred.[13]
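By way of illustration only – not as a description of any particular product or of the tools the Working Party had in mind – the short Python sketch below shows what “defining events and alerts by correlating log data” can amount to in its simplest form: counting recent failures from the same source address. The log layout, the five-minute window and the threshold of twenty failures are all assumptions chosen for the example.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # correlation window (assumed)
THRESHOLD = 20                  # failures per window before alerting (assumed)

def parse(line):
    # Assumed record layout: "2020-08-06T10:15:30 FAIL 192.0.2.7 alice"
    ts, outcome, ip, user = line.split()
    return datetime.fromisoformat(ts), outcome, ip, user

def alerts(lines):
    recent = defaultdict(list)  # source IP -> timestamps of recent failures
    for line in lines:
        ts, outcome, ip, _ = parse(line)
        if outcome != "FAIL":
            continue
        # Keep only failures inside the window, then add the new one.
        recent[ip] = [t for t in recent[ip] if ts - t <= WINDOW] + [ts]
        if len(recent[ip]) == THRESHOLD:
            yield f"{ts.isoformat()} possible brute-force from {ip}"

Even this minimal detector must retain IP addresses and timestamps beyond the individual transactions it inspects – precisely the additional processing from which the paradox arises.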
More specifically, the European Network and Information Security Agency’s (ENISA) 2011 report identified the “must-have tools” for detection of network security breaches as “firewalls, antivirus (alerts), IDS/IPS and NetFlow”;[14] for analysing security breaches, the Forum of Incident Response and Security Teams (FIRST) consider that relevant sources may include “Netflow data, Router logs, Proxy server logs, Web application logs, Mail server logs, DHCP server logs, Authentication server logs, Referring databases, Security equipment, such as firewall or intrusion detection logs”.[15]
Both the example of website logs in Breyer and the longer lists from ENISA and FIRST indicate that the information needed to detect and investigate breaches is likely to be already held – if only briefly – by the organisation that operates the online service. To send and receive packets, a networked computer must process the Internet Protocol (IP) header data from which netflow, router, firewall and Intrusion Detection/Protection System logs are derived; to deliver a web or email service, it must process the application headers that are recorded in logfiles; to safely connect local user devices it must provide DHCP, anti-virus and proxy services; to provide authentication it must maintain user accounts. A key feature of these information sources – in legal terms – was noted as long ago as 2003: that, to be useful, they must contain Internet Protocol (IP) addresses and timestamps.[16]
[17] Fortunately both legislation and case law are aware of this: Recital 49 of the Regulation recognises a need for “processing of personal data to the extent strictly necessary and proportionate for the purposes of ensuring network and information security”; Breyer recognised that retaining and processing logs to detect and investigate attacks might be lawful.
[18] Thus a resolution of the breach detection paradox should be possible: where additional personal data processing is necessary to protect personal data, the law both requires and permits this.
Breach detection involves processing, for a second purpose, of personal data that the service or network operator must already process for the primary purpose of providing their service. Under the GDPR, the first two principles to consider in such situations are the relationship between the two purposes, “purpose limitation”, [19] and the necessity of the additional processing for the second purpose, “data minimisation”.
[20] The results of that inquiry will then guide compliance with the remaining principles. This paper therefore examines breach detection from the perspectives of purpose and necessity. Since automation is increasingly needed to handle the growing volume of data relevant to breach detection, we then investigate how this may affect the purpose and necessity analysis, and what additional requirements may result from the GDPR’s specific provisions on profiling and automated decision making. Finally we conclude that breach detection is not only compatible with the GDPR, but should be welcomed and expected by regulators, operators and data subjects as a key part of the provision of any internet-connected system or service.
2 Purpose The Article 29 Working Party considers that Specification of purpose is an essential first step in applying data protection laws and designing data protection safeguards for any processing operation. … The principle of purpose limitation is designed to establish the boundaries within which personal data collected for a given purpose may be processed and may be put to further use [21] This should, for example “prevent the use of individuals’ personal data in a way (or for further purposes) that they might find unexpected, inappropriate or otherwise objectionable”.
[22] For breach detection the purpose is both clear and set out in law. GDPR Recital 49 concerns: ensuring network and information security, i.e. the ability of a network or an information system to resist, at a given level of confidence, accidental events or unlawful or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted personal data, and the security of the related services offered by, or accessible via, those networks and systems [23] Most of the data processed by breach detection systems will therefore serve two purposes: providing a networked service, and keeping that service secure. Both purposes are known, specified and legitimate (according to Recital 49) at the time when data are collected.
The Working Party recognises that “[p]ersonal data can be collected for more than one purpose. In some cases, these purposes, while distinct, are nevertheless related to some degree. In other cases the purposes may be unrelated”.
[24] The two cases require different safeguards to protect individuals’ interests: the following analysis suggests that properly-conducted breach detection should have no difficulty in satisfying the requirements of both.
2.1 Breach Detection as a Compatible Purpose The first option, covered by GDPR Article 6(4), is that a group of purposes may be “compatible”. The Working Party explain that this requires an assessment of “the relationship between the purposes…; the context … and reasonable expectations of data subjects…; the nature of the personal data and the impact of the further processing…; the safeguards adopted by the controller”.
[25] The close relationship between operating a service and securing it has been increasingly recognised by legislation, case law, and regulators’ guidance. Both GDPR Recital 49 [26] and Breyer [27] link breach detection and response to the provision of networked services; the Working Party’s Guidelines on Breach Notification encourage all data controllers and processors to “put in place processes to be able to detect and promptly contain a breach”.
[28] These bases in law [29] together with widespread reporting of the harm caused by on-line security incidents and regulators’ criticisms, [30] should mean data subjects very “reasonably expect” that those providing services will also do what is necessary to secure them and the data they contain.
[31] Concerning nature and impact, the kinds of data used for breach detection will normally be the same as those involved in providing the service. The Working Party note that additional processing with a negative or uncertain impact is unlikely to be compatible: [32] the purpose of breach detection in fact demands that the impact on users be positive.
Finally, security teams involved in breach detection have at least as strong an interest as their users in applying organisational and procedural safeguards to keep their information and processing secure.
[33] Logfiles and information derived from them are likely to contain information that would help an attacker find weaknesses in a system; [34] they can also reveal to an attacker whether or not their activities have been detected. Both undermine the defenders’ purpose. These files and processing will therefore normally be kept separate from the operation of the service and subject to additional technical and organisational controls. For example security data and systems will normally be protected by strong access controls, and those with access to them will be under contractual obligations of confidentiality. The technical safeguards that can and should be used during breach detection and investigation are described in section 4.
Six years ago – before Recital 49 and Breyer had explicitly recognised the link between service provision and service security – the Working Party nonetheless cited as compatible purposes “preventing fraud and abuse of the financial system” [35] and a smart grid operator that “wishes to implement an intelligent system, including an analytics tool, to detect anomalies in usage patterns, which may give reasonable suspicion of fraudulent use”. In particular the latter “stems from, and is in furtherance of, the initial purposes of providing energy to the customers and charging them for the energy they use. Customers could reasonably expect that their provider will take reasonable and proportionate measures to prevent fraudulent use of the energy, in the interest not only of the energy company, but also those customers that are paying their bills correctly”. [36] Provided appropriate safeguards are applied, such processing is considered compatible. The same should apply to the processes organisations use to detect misuse of online systems and data: this, too, is in the interests of both organisations and their customers.
Note that this would not extend to other uses of the data generated by use of computers and networks – for example to enforce policy or investigate crime, including attempts to identify attackers. These would constitute additional purposes, requiring their own assessment and safeguards. Where organisations use the same data and systems for multiple purposes, they must ensure these are kept distinct by organisational and technical safeguards appropriate to each purpose and the risks it involves.
2.2 Breach Detection as a Separate Purpose

Treating breach detection as a “compatible purpose” to the operation of an on-line service means both activities have the same legal basis (probably “necessary for contract” under Art.6(1)(b)) and the same obligations apply to both sets of processing. This may be helpful when existing service data are later discovered to have value for breach detection; [37] however users may gain additional protections if breach detection is treated as a separate purpose, necessary for a legitimate interest of the service operator, as suggested by both Recital 49 and the Breyer judgment.
Under this approach, the separate purpose must be “specified, explicit [and] legitimate” [38] and the processing must fully satisfy the requirements of the appropriate legal basis. In addition to the common requirement (under both GDPR Articles 6(1)(b) and 6(1)(f)) that it must be necessary, legitimate interest processing must satisfy the balancing test that the interest is not overridden by the data subject’s fundamental rights and freedoms. Individuals also have a right to seek a review of that balancing test against their own particular circumstances, under the Article 21 right to object.
None of these requirements should cause significant difficulties for the operator of an online service who wishes to use service data to detect breaches. As discussed above, the purpose is specified when data are first collected; both Breyer and Recital 49 indicate that it is legitimate. It can therefore be made explicit to users. The practical issue of how to inform users about processing of data that is observed, rather than provided directly by the user, is common to both the “compatible purpose” and “separate purpose” approaches. Regulators’ practice on their own websites [39] indicates that including breach detection and response in a privacy notice is an appropriate mechanism. Cormack explains how the balancing test will generally be satisfied by the existing practice of incident response and security teams.
[40] In particular, unlike examples of secondary processing considered by Balboni et al, detecting and remedying breaches is not an action whose benefit to the controller “is considered to prevail over the protection of personal data”, [41] but a shared interest that enhances that protection, actively supporting users’ rights and freedoms.
Security teams can therefore provide additional reassurance to their users, beyond what the law requires, by meeting the requirements of both approaches to purpose. Respecting purpose compatibility ensures that security activities are closely related to the operation of the service, stay within the expectations of users and are subject to appropriate safeguards. Treating security, in addition, as a separate purpose further ensures that it is always explicitly declared to service users and that their rights and freedoms, not just the security of the individual service, are taken into account.
Finally, the Article 29 Working Party mentions “surprise” as an indicator of non-compatible processing.
[42], [43] Given the stress in legislation, case law and guidance on the importance of protecting personal data, it seems likely that the Working Party – and service users – would actually be more surprised by a service provider that does not process data to protect its systems than by one that does.
3 Necessity

Having concluded that breach detection satisfies the purpose requirements of the GDPR, the next question is what processing is necessary to achieve it.
3.1 When is processing “necessary”?

Recital 39 of the GDPR states that “personal data should be adequate, relevant and limited to what is necessary for the purposes for which they are processed”.
[44] The Article 29 Working Party has explained this use of “necessary”, in both the Regulation and its preceding Directive, as meaning “any processing of personal data involved is the minimum amount required to fulfil its aim”, [45] noting also that “if other effective and less invasive means to achieve the same goal exist, then it would not be ‘necessary’”.
[46] This definition seems to exclude any possibility of further qualifying the word “necessary”, since any processing less than the minimum required cannot, by definition, fulfil the aim, so fails the requirement that processing be “adequate”.
It is therefore puzzling to find both Recital 49 of the GDPR and Article 5(3) of the ePrivacy Directive requiring that processing – for network and information security, and of cookies, respectively – must be “strictly necessary” (emphasis added).
[47], [48] The explanation appears to be that these phrases derive from a different source: the requirement in Articles 7 and 8 of the Charter of Fundamental Rights that any interference with rights must be “necessary in a democratic society”.
[49] In this context, “necessary” has been ruled by the European Court of Justice to be “not synonymous with indispensable” and “[n]or should it be interpreted too literally, as this would set too high a bar and make it unduly difficult for otherwise legitimate activities which may justifiably interfere with fundamental rights to take place”.
[50] Thus the ePrivacy Directive’s requirement for cookies to be “strictly necessary” – which the Working Party interpret as “if cookies are disabled, the functionality will not be available” [51] – narrows the wider Charter sense of “necessary” down to that contained in data protection law.
This section will therefore interpret any qualified use of “necessary” as deriving from the Charter sense and follow the Working Party’s approach that, in data protection law, “necessary” (whether qualified or not) involves a requirement that processing must be reduced to the minimum possible that will still achieve the objective. Furthermore, if two approaches are “equally effective”, then the less intrusive should be adopted.
3.2 What processing is necessary for breach detection?

Very rarely, a security breach may involve only a single event: far more often there will be several preparatory steps involved. The key to early detection is to spot these sequences of events, ideally before the critical point of infection or compromise. For example detecting an attacker scanning for vulnerabilities involves recognising the same test being run against several internal addresses; malware infections are often detected by linking a local machine’s visit to an infected website with its subsequent “call-home” connection to a machine controlled by the attacker; [53] a phishing incident will often be revealed by the same account logging in from a rapid sequence of geographically implausible places. In each case linking the individual events into the sequence that reveals them to be abnormal and needing further investigation requires them to be stored in association with a relevant identifier, such as the IP address or account name. Processing those identifiers is thus “necessary”, in the narrow data protection sense, as there is no less intrusive way to recognise the critical sequences of events.
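By way of illustration, here is a minimal Python sketch of the phishing example above. The accounts, coordinates, timestamps and speed threshold are all invented; the point is structural: only because each login is stored against its account name can the individually innocuous events be linked into a geographically implausible sequence.

```python
import math
from collections import defaultdict

# Hypothetical login events: (account, unix_time, latitude, longitude).
# In practice these fields would be parsed from authentication logs.
EVENTS = [
    ("alice", 1000, 51.5, -0.1),   # London
    ("alice", 2800, 40.7, -74.0),  # New York, 30 minutes later
    ("bob",   1000, 48.9, 2.4),    # Paris
    ("bob",   90000, 48.9, 2.4),   # Paris, next day
]

MAX_SPEED_KMH = 1000  # faster than any scheduled flight => implausible

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    to_rad = math.radians
    dlat = to_rad(lat2 - lat1)
    dlon = to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2)) * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def implausible_logins(events):
    """Link each account's logins into a time-ordered sequence and flag
    consecutive pairs that would require impossible travel speeds."""
    by_account = defaultdict(list)
    for account, ts, lat, lon in events:
        by_account[account].append((ts, lat, lon))
    alerts = []
    for account, logins in by_account.items():
        logins.sort()
        for (t1, la1, lo1), (t2, la2, lo2) in zip(logins, logins[1:]):
            hours = max((t2 - t1) / 3600, 1e-6)
            if distance_km(la1, lo1, la2, lo2) / hours > MAX_SPEED_KMH:
                alerts.append((account, t1, t2))
    return alerts

print(implausible_logins(EVENTS))  # [('alice', 1000, 2800)]
```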
Although random malicious traffic on the Internet is so prevalent [54] that every user is at risk of becoming a victim – hence likely to benefit directly from early detection and mitigation of breaches – in any given period some users will avoid this fate. This raises the question whether processing personal data of those fortunate individuals is also “necessary”. Not recording data for a particular machine or account obviously means that individual user will not benefit from detection and response when the worst does happen. But, since attackers commonly use their initial success to attack others within the system or organisation, any gaps in recording will also put others at risk. Finally, detecting unusual traffic depends on comparison with a normal baseline: problems will often be detected when behaviour varies from that of uncompromised computers and accounts. Thus comprehensive logging and processing of data is necessary, in the narrow data protection sense, for breach detection, analysis and response.
The legal position of breach detection is therefore different, in context as well as scale, from government powers to retain data for law enforcement purposes that were analysed, and found not “strictly necessary”, by the European Court of Justice in Digital Rights Ireland.
In that case most of the retained data related to individuals who were not “even indirectly, in a situation which is liable to give rise to criminal prosecutions”.
[55] Woods notes the subsequent case of Tele2/Watson describing this retention as “indiscriminate” because “there is no link between the data retention and the threat posed by a specific individual”; this “goes beyond what is ‘strictly necessary’”.
[56] In particular, collecting data in case individuals commit criminal acts “transform[s] [them] into potential suspects”.
[57] By contrast, when detecting security breaches, all those whose data are retained are likely to be victims; indeed, according to Eurobarometer, 42% of them already have been.
[58] This activity is not “indiscriminate” and does not transform their status. Logging and processing information to detect breaches and provide help supports the rights and freedoms of individual users, not just an “objective of general interest” such as “the fight against serious crime”.
[59], [60] Finally, Tele2/Watson saw government data retention as an exception to the privacy protections in the ePrivacy Directive, [61] whereas “ensuring network and information security” is explicitly recognised as contributing to those protections by both the amended ePrivacy Directive and the GDPR.
[62] According to Woods, even where data retention may be necessary, “stringent safeguards to prevent abuse would be of central importance in determining whether such powers were proportionate”.
[63] Not only is breach detection compatible with such safeguards, omitting them is likely to make it significantly less effective.
4 Safeguards

Both necessity and purpose limitation principles therefore consider the safeguards that can be applied to the processing as a relevant factor. As noted in the earlier purpose limitation discussion, the organisational safeguards needed to ensure the effectiveness of breach detection and investigation are strongly aligned with those required to ensure privacy is protected. Here we consider the technical safeguards that can, and should, be used.
Nearly all the identifiers used for breach detection – including IP, MAC and email addresses – have the technical characteristics of pseudonyms, defined in the GDPR as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person”.
[64] As noted in Breyer , [65] for website logs – and others that record the activity of external users – the “additional information” is not just held separately, but by an entirely different organisation: police powers are likely to be required to obtain it. Even where logs relate to users within the organisation, the additional information is normally generated by separate systems – those concerned with authentication and address allocation – from the network flows and application logs that are the main resource for breach detection.
Furthermore most breach detection can be done without the attribution step. As described above, the first stage in detection is to link several events – each associated with a pseudonymous identifier such as an IP address – into a sequence that may indicate a security breach. Analysis to determine whether a breach is the most likely cause of such an alert can also normally be done using just the pseudonymised data. Only when this investigation concludes that a breach probably has occurred is it necessary to identify the individuals involved: to contact them, confirm what has happened and provide assistance. Events that do not correlate into alerts, and alerts whose investigation reveals them to have an innocent explanation, can be left as unattributed pseudonyms. Breach detection can therefore be done within a framework recognised by the GDPR both as a safeguard that “can reduce the risks to the data subjects concerned and help controllers and processors to meet their data-protection obligations”, [66] and as an “appropriate technical measure” for implementing data protection principles including minimisation [67] and security.
[68] Bolognini and Bistolfi consider that in situations where the purpose may require identification of a subset of individuals, the GDPR’s approach to pseudonyms in fact provides the best protection since – unlike anonymisation, which takes data outside the scope of data protection law – pseudonymisation provides both technical safeguards and continuing regulation: it “is able to mitigate the risks of a data subject’s direct identification, guaranteeing that the data controller uses the data in compliance with norms governing data protection”.
[69] Treating security event and alert data as pseudonyms ensures that data protection law regulates both the data and processes used for breach detection and the data and processes for linking breaches to individual victims.
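A minimal sketch of how this differentiated handling might be implemented is below. The key, identifiers and log detail are invented; the essential structure is that analysis operates only on keyed-hash pseudonyms, while the table linking pseudonyms back to real identifiers is held separately and consulted only once an investigation concludes a breach is probable.

```python
import hashlib
import hmac

# Secret key held by the security team; illustrative value only.
PSEUDONYM_KEY = b"rotate-and-store-separately"

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym from an identifier (IP address, account
    name). Equal inputs give equal outputs, so events can still be
    correlated, but reversal needs the separately held attribution table."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

attribution: dict[str, str] = {}   # kept in a separate, access-controlled store

def record_event(ip: str, detail: str) -> tuple[str, str]:
    """Only the pseudonym enters the analysis store."""
    p = pseudonymise(ip)
    attribution[p] = ip
    return (p, detail)

event = record_event("192.0.2.7", "call-home to known C2 host")
print(event)                  # analysts correlate and investigate on this
# ...only after investigation confirms a probable breach:
print(attribution[event[0]])  # attribute, to contact and assist the victim
```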
Where pseudonyms are used, GDPR Article 11 relaxes the normal rule that individuals must be informed in advance of processing, recognising that identifying individuals to inform them that their pseudonymised data are being processed would remove the benefit of the safeguard.
The purpose of breach detection and response thus encourages organisations to do at least as much as the law requires, by providing general information to all users of a system that data will be processed for breach detection and response, and informing specific users who do need to be identified, immediately after that linking takes place.
Again, there is a contrast with law enforcement data retention where, according to Spina, “the fact that data are retained and subsequently used without the subscriber or registered user being informed is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance”.
[71] Effective breach detection and response require the user to be informed, and action to be taken, as soon as possible after the event. Not informing a user when it appears likely that they are a victim of a security breach would defeat our purpose.
Breach detection and response can, and should, therefore follow the “differentiated approach” recommended by Mantelero for data and processing minimisation.
[72] Analysis to detect problems is done using pseudonyms; affected users are identified only at the last stage of response, when offering them assistance. Any lessons learned can be shared to help others using either anonymised data or pseudonyms (such as remote IP addresses) that are only meaningful to the recipient organisation.
[73] This approach also contributes to the security and privacy of data, users and systems, since it minimises the risk of analysts inadvertently discovering or disclosing information that is not relevant to the investigation. If unusual activity on a network is analysed and found not to be malicious, the analyst can ignore it without ever knowing which individual users were involved.
Finally, Bolognini and Bistolfi note that using linked pseudonyms to identify and assist individual victims may well involve less, and more predictable, limitation of rights and freedoms than imposing preventive measures on a larger, anonymous, group.
[74] The specific question of whether this may constitute “profiling”, and how it can be done in accordance with the GDPR, will be considered after examining the general issues raised by the increasing use of automation in breach detection.
5 Automated Breach Detection

Techniques for breach detection have been developed continuously over more than twenty years.
[75] Originally these involved manual inspection of logfiles and network flows; visualisation and investigation tools were then developed to help analysts perform these procedures.
Over the same period, our use of networked computers has expanded massively in both scale and complexity, generating security data in much greater volumes than human analysts can handle. In 2014 a national research computing service generated less than 10 gigabytes of logs a day: [77] in 2018 a single medium-sized university generated over 200 gigabytes.
[78] Humans can no longer look at every event on a network or system: indeed looking at individual events is unlikely to be sufficient to reveal most security breaches. As the Article 29 Working Party notes, breaches generally appear as anomalies within the normal patterns of activity and detecting them requires correlating events occurring at different times, in different locations or, indeed, reported by entirely different systems.
[79] For example Huang, Kalbarczyk and Nicol describe a hybrid breach detection system that combines information about network flows with logs from applications and content inspection systems.
Automation is therefore an essential first stage in most breach detection processes: typically software will be used to analyse the events recorded in flows and logfiles, to identify groups of events that may indicate security breaches – either because they match known patterns of unwanted activity, or because they do not match normal patterns – and to alert human analysts to the need to investigate these correlated groups. [81]
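A toy example of such an event-reduction stage is sketched below, assuming simplified flow records and an invented threshold for how many distinct internal targets constitute a likely scan; real deployments tune such rules continuously.

```python
from collections import defaultdict

# Simplified flow records: (source_ip, dest_ip, dest_port).
FLOWS = [
    ("203.0.113.5", "10.0.0.%d" % i, 22) for i in range(1, 40)
] + [("10.0.0.9", "10.0.0.10", 443)]

SCAN_THRESHOLD = 25  # distinct internal targets; tuning is deployment-specific

def reduce_and_correlate(flows):
    """First stage of automation: discard flows that match no rule, and
    correlate the rest into candidate alerts for a human analyst."""
    targets = defaultdict(set)
    for src, dst, port in flows:
        if dst.startswith("10."):          # only watch traffic to internal hosts
            targets[src].add(dst)
    return [
        {"type": "possible scan", "source": src, "targets": len(dsts)}
        for src, dsts in targets.items()
        if len(dsts) >= SCAN_THRESHOLD     # benign sources never reach this
    ]

for alert in reduce_and_correlate(FLOWS):
    print(alert)   # the analyst investigates; everything else is never seen
```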
This section reviews how automation affects the earlier discussion of purpose, necessity and safeguards; the next considers it in the light of the GDPR’s provisions on profiling and automated decision making.
5.1 Automation and Purpose

Introducing automation does not change the purpose of breach detection – “ensuring network and information security” [82] – it merely changes the means by which part of that purpose is achieved. In fact, automation should guarantee adherence to that purpose, since event reduction and correlation programs can be written to specifically target groups of events likely to indicate security breaches. Unlike human analysts, their focus is hard-coded and cannot wander onto other implications of the data they may see.
Automation may even allow the same breach detection purpose to be achieved through fundamentally different, and less intrusive, techniques: not just a faster version of what was previously done by a human. Zeuch et al describe how an algorithm needed to examine fewer log fields than a human to detect attacks; [83] Anderson et al suggest how malware infections can be detected from the encryption parameters used, rather than having to decrypt all traffic.
[84] Even where programs implement the same method as humans, they can be written to ensure compliance with requirements such as the legitimate interests balancing test, discussed in section 2.2. For example, in accordance with privacy by design principles, [85] automated Denial of Service detection processes benefit from the structure of their input data: inspecting low-risk headers first and passing most legitimate traffic based on these few fields, then performing more detailed inspection of higher-risk data only for flows whose headers raise concern.
[86] Parts of messages that contain insufficient information about attacks to justify examining them can be ignored. This ensures that actions involving a slightly greater (but still low) risk to individuals’ privacy are only taken where this is justified by a greater risk of individuals being harmed by a security breach.
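A schematic of this layered inspection, with invented header fields, a hypothetical suspect list and a placeholder payload check, might look as follows; the design point is that the more intrusive payload examination is only ever reached for flows whose low-risk headers already raise concern.

```python
SUSPECT_SOURCES = {"198.51.100.23"}  # hypothetical reputation list

def check_headers(headers: dict) -> bool:
    """Low-intrusion first pass: a few header fields only."""
    return (headers["src"] in SUSPECT_SOURCES
            or headers.get("syn_rate", 0) > 1000)  # invented rate threshold

def check_payload(payload: bytes) -> bool:
    """Higher-risk second pass, reached only for already-suspect flows."""
    return b"known-attack-marker" in payload  # placeholder signature

def classify(headers: dict, payload: bytes) -> str:
    if not check_headers(headers):
        return "pass"  # most legitimate traffic stops here, payload unread
    return "drop" if check_payload(payload) else "pass"

print(classify({"src": "192.0.2.1", "syn_rate": 3}, b"GET / HTTP/1.1"))           # pass
print(classify({"src": "198.51.100.23", "syn_rate": 5}, b"known-attack-marker"))  # drop
```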
5.2 Automation as a Safeguard

Considering automation as the first stage in Mantelero’s differentiated approach to pseudonyms suggests that it is likely to act as a safeguard of individuals’ rights. One of the main aims of automation is to eliminate the “noise” (from a breach detection perspective) represented by the majority of legitimate and non-threatening activity, allowing human analysts to concentrate on threatening and unusual patterns. Events with a harmless explanation are not only protected by pseudonymisation, as discussed in the previous section: with an automated event reduction system they are unlikely to be seen by human analysts at all. Inspection by computer – for example when checking emails for malware – has been treated by the Article 29 Working Party as less privacy intrusive than the same check being done by a human.
[87] Automation of such tasks should be considered as a positive safeguard.
As well as making the breach detection process more privacy-respecting, automation may make it faster and more effective. With early detection of breaches recognised as an important way to reduce their impact, [88] approaches such as those described by Zeuch et al – which identified 784 security incidents in a dataset where traditional techniques found only eight [89] – may make a major contribution to data protection.
Kuner et al suggest that automation may also act as a safeguard against discrimination and bias: “human decision-making is often influenced by bias, both conscious and unconscious, and even by metabolism … intriguing possibility that it may in future be feasible to use an algorithmic process to demonstrate the lawfulness, fairness and transparency of a decision made by either a human or a machine to a greater extent than is possible via any human review of the decision in question”.
[90] Tired, hungry incident responders and users of their systems should welcome the consistency and respectfulness of automated decisions.
5.3 Automation and Necessity?

This analysis suggests that, for types of breach where it is known to be effective, automation can reduce both the data protection risks from security breaches – because they should be detected and resolved more quickly – and the risks arising out of the detection process itself – because there is less human inspection of users’ activities and safeguards can be built in. In particular, the human intrusion into legitimate online activities will be much less, as these will be classified as non-malicious by machines, rather than human eyes. For more than a decade, automation has been recognised as a way to defend against malware and spam “without prejudice to confidentiality of the communications”: [91] it may well be appropriate to view more modern automated breach detection techniques in the same light.
Indeed, under the “necessity” principle that processing should choose the least intrusive among a number of different ways of achieving its purpose, [92] it might be argued that the law should positively encourage the greater use of automation, where it can replace human inspection. This is likely, however, to require consideration of GDPR Article 22, which applies specifically to “Automated individual decision-making, including profiling”. The next section considers how this might affect the use of automation in detecting breaches.
6 Profiling and Automated Decision Making

“Profiling” is defined in Article 4(4) of the GDPR as “any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”. [93] Authors vary in their assessment of the Regulation’s attitude to profiling. De Hert and Papakonstantinou consider the Regulation, like its predecessor Directive, treats it as a potentially beneficial activity whose risks can be mitigated by regulatory controls. Therefore “the new rules do allow profiling operations to take place even based on sensitive data under the general, but not always applicable, condition that special measures for the protection of individuals have also been implemented”.
[94] Rubinstein agrees that automating decision-making can “substantially improve its accuracy and scope”, [95] but is less optimistic about the law’s power to ensure that improvement is used to benefit individuals.
This section considers whether breach detection will involve either “profiling” or “automated decision making” within the GDPR definitions and, if so, how it can comply with the law’s requirements.
6.1 Profiling

Breach detection will involve processing information about the use of networks and systems, to identify attacks and those who have been affected by them. It could be argued that this falls outside the Regulation’s definition of profiling, as the purpose is to identify insecure machines and accounts, not to “evaluate … personal aspects” of their users. However such hair-splitting should be unnecessary, as the “special measures for the protection of individuals” set out in Recital 71 [96] are, in any case, things that strongly support the aims of automated breach detection systems and their operators. Those developing such systems already strive to identify “appropriate mathematical or statistical procedures”. False positives (alerts when there is no security breach) and false negatives (failure to detect an actual breach) both undermine the effectiveness of systems and waste operators’ and users’ time, so developers and operators are keen to “ensure … that factors which result in inaccuracies in personal data are corrected and the risk of errors minimised”. Much of the information processed would help an attacker – not least by informing him whether his activities have been detected and recognised – so there is a strong incentive to use both technical and organisational measures to keep it secure. In 2013 the Article 29 Working Party recommended pseudonyms (discussed in section 4) as a specific safeguard for profiling.
[97] Discriminatory algorithms – where protected characteristics of the attacker or victim affect the likelihood of an attack being detected – would constitute false positives, false negatives, or both, so should quickly be rejected.
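To make the accuracy incentive concrete, here is a sketch of the kind of evaluation developers routinely perform, with a toy detector and hand-labelled test events (both invented): every false positive wastes analysts’ and users’ time, and every false negative leaves a victim unhelped, so both counts are driven down before deployment.

```python
def evaluate(detector, labelled_events):
    """Count false positives (alerts with no breach) and false negatives
    (missed breaches): the first wastes analysts' and users' time, the
    second leaves a victim unhelped."""
    fp = fn = 0
    for event, is_breach in labelled_events:
        alerted = detector(event)
        if alerted and not is_breach:
            fp += 1
        if is_breach and not alerted:
            fn += 1
    return {"false_positives": fp, "false_negatives": fn}

# Toy detector and hand-labelled test events, both invented for illustration.
detector = lambda event: event["failed_logins"] > 10
labelled = [
    ({"failed_logins": 50}, True),   # brute force, caught
    ({"failed_logins": 12}, False),  # forgetful user, wrongly flagged
    ({"failed_logins": 2}, True),    # stealthy attack, missed
]
print(evaluate(detector, labelled))  # {'false_positives': 1, 'false_negatives': 1}
```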
If profiling involves “systematic and extensive evaluation of personal aspects” then Article 35(3)(a) requires a data protection impact assessment (DPIA).
[98] Since breach detection systems are not intended to “evaluate personal aspects” at all, it seems unlikely that they would reach this threshold. However, as their legal basis is the legitimate interests of the organisation, they will in any case be subject to the data minimisation and rights balancing tests required by Article 6(1)(f). Before committing to a large-scale, expensive and resource-intensive deployment, organisations are likely to perform a detailed assessment of the shared risks and benefits for both the organisation and its users, in terms very similar to a formal DPIA.
Whether or not breach detection systems involve profiling in the GDPR sense, they will therefore benefit greatly from being developed and used in accordance with the Regulation’s wishes. In fact the Regulation does not impose any requirements merely because an activity falls within the definition of “profiling”. Instead GDPR Article 22 places requirements on “automated individual decision-making”, which is considered in the next section.
6.2 Automated Decision Making

In most cases automated breach detection will be used to raise alerts when sequences of events require further investigation by human analysts. Any subsequent action will normally be based on the conclusions reached by those analysts, taking into account their previous experience and the context surrounding the particular sequence of events. For example an analyst should quickly identify when a spike in network traffic is due to a new release of a popular operating system, rather than an attack.
[99] According to the Article 29 Working Party’s Guidelines on Automated Individual Decision-Making and Profiling, this involvement of “someone who has the authority and competence to change the decision … consider[ing] all the relevant data” will take the activity outside the scope of Article 22.
In a few situations, however, the threat to a system, data or users will be sufficiently clear and urgent that operators will choose to have an alert trigger an immediate automated response. Such responses are commonly used to block senders of virus-infected or spam emails; [101] in some countries to quarantine ISPs’ customers whose systems appear to have been compromised; [102] and increasingly to re-direct distributed denial of service (DDoS) attacks away from their targets. These systems – sometimes referred to as Intrusion Prevention Systems (IPS) – make decisions without prior human review, so might constitute “automated individual decision-making”, regulated by GDPR Article 22.
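The structural difference between the two modes of response can be expressed in a few lines. In this sketch the confidence threshold and alert categories are invented; only alerts that are both high-confidence and in an urgent category trigger a solely automated block, while everything else goes to an analyst with the authority to decide differently.

```python
import queue

analyst_queue = queue.Queue()  # alerts awaiting human assessment

def handle_alert(alert: dict) -> str:
    """Only clear, urgent threats trigger a solely automated response;
    everything else goes to an analyst who has the authority and
    competence to reach a different decision."""
    if alert["confidence"] > 0.99 and alert["category"] in {"ddos", "worm"}:
        return f"auto-blocked {alert['source']}"  # immediate response, logged for review
    analyst_queue.put(alert)
    return "queued for human review"

print(handle_alert({"category": "ddos", "confidence": 0.999,
                    "source": "198.51.100.0/24"}))
print(handle_alert({"category": "odd-login", "confidence": 0.6,
                    "source": "alice"}))
```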
Article 22(1) states that “[t]he data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Applying this to automated incident prevention therefore raises three questions: what constitutes a solely automated decision? What right may be created? Does the decision sufficiently affect an individual to create that right? Kuner et al note that Article 22(1) is an expansion – from profiling to any kind of automated decision-making – of Article 15(1) of the 1995 Data Protection Directive.
[103] Analysing that Article, Bygrave concluded that “a response on the part of computer software … to particular constellations of data and data input” probably is a decision.
[104] He also considers that a decision is solely automated “if a decision … originates from an automated data-processing operation the result of which is not actively assessed by [any person] before being formalised as a decision”.
[105] Unsurprisingly, automated incident prevention does therefore involve solely-automated decisions.
Bygrave considered the Article 15 right “one of the most difficult to construe properly” in the Directive.
[106] He noted that it “does not take the form of a direct prohibition on a particular type of decision making”: [107] a Member State could comply either by creating an individual right of human review after an automated decision was made, or by proactively banning such decisions.
[108] A right to review is stronger – since it concerns the decision reached, not just the risks involved in the processing that led up to it – than the right to object that already exists (under Art.14 DPD/Art.21 GDPR) whenever processing is based on legitimate interests.
[109] Rubinstein interprets Article 15 as a right to “resist automated decisions and seek human intervention”; [110] in 2013 the Article 29 Working Party also appear to have intended this interpretation of Article 22 as an individual, retrospective right: “Data subjects should also have the right to access, to modify or to delete the profile information attributed to them and to refuse any measure or decision based on it or have any measure or decision reconsidered with the safeguard of human intervention”.
[111] However five years later the Working Party concluded that the Article 22 “right” was on the contrary “[a] general prohibition on this type of processing […] to reflect the potential risks to individuals’ rights and freedoms”.
[112] Whether automated incident prevention continues to be permitted therefore depends, since none of the Art.22(2) exemptions applies, on whether it “similarly significantly affects” data subjects.
An automated action that prevents an individual becoming a victim of crime might seem to affect them significantly; however, from the context, Bygrave considers that Article 15 in fact requires a decision that is “significantly adverse in its consequences” (emphasis added), [113] suggesting that “it is extremely doubtful that Art. 15(1) may apply when a decision has purely beneficial effects for the data subject.” [114] In their 2013 analysis the Article 29 Working Party expected the future Regulation to provide “a reasonable degree of discretion to assess the actual effects – positive and negative”; [115] in 2018 that “only serious impactful events will be covered” by Article 22.
[116] Although not explicit, this seems to confirm that only adverse effects and impacts are of concern (the requirement for an “adverse legal effect” is made explicit in s.49(2)(a) of the UK’s Data Protection Act 2018). Since the effect of automated incident prevention should be positive – to remove, or at least reduce, the impact of the crime on the victim – this “seriously impactful” test should ensure it does not fall within the Working Party’s Article 22 ban.
The attacker whose aims – such as the installation of profitable ransomware – are thwarted might wish to argue that this does constitute a “significantly adverse” outcome for them. However automated blocking of such an attack will not create “legal effects” of the kind discussed by the Working Party (cancellation of a contract, denial of a social benefit granted by law, refusal of entry to a country).
[117] Any subsequent process that did lead to legal effects such as fines or imprisonment would be the result of considerable human decision-making within the prosecution system, so would fall outside Article 22.
In the past the Article 29 Working Party has strongly supported automated scanning and blocking of virus-infected e-mails.
[118] Automated systems are now used to protect against ransomware and Denial of Service attacks that can shut down even global service providers [119] and user organisations.
[120] This new interpretation of Article 22 as a prohibition makes it essential that its threshold is set well above the level of actions required to defend users, networks and systems. In particular, regulators must be cautious in interpreting “similarly significant” non-legal effects to ensure that automatically depriving criminals of financial opportunities does not fall within the ban.
6.3 Automation in Practice

By interpreting Article 22 as a prohibition, and therefore having to apply a high threshold, the Working Party has removed lower-impact automated decisions from both the additional information provisions in Article 15(1)(h) and the safeguards in Article 22(3). Even if this means that organisations are not legally required to operate their automated defences in accordance with these Articles, it may well benefit their purpose to do so.
Users of online services should be reassured to know that their providers are using automated technologies to detect activity that is abnormal or has the characteristics of known attacks and, if appropriate, to block it. As discussed in the previous section, automated alerts reduce the quantity of personal data that needs to be inspected by human incident responders, thus providing greater privacy for legitimate use. The Working Party has recognised that having a program, rather than a human, check for malicious content is more privacy-protecting; [121] it should also be faster and more effective. A public notice of the presence of automated defences might even discourage some attackers who conclude that the benefits of attacking that organisation’s systems and users are not worth the risk. Such transparency should not, of course, go as far as telling an attacker how to circumvent the defences and evade detection, but the level of explanation proposed by the Article 29 Working Party in 2018 should not create these risks.
[122] Even for high-impact decisions, the law does not appear to require data controllers to notify data subjects when an automated rule has been triggered.
[123] The purpose of breach detection will, nonetheless, often encourage operators to do so. Where a user has been placed in quarantine, the security team will want to assist them in removing the malicious software or other cause. Blocking of a DDoS attack is likely to be of interest to an organisation’s managers and IT staff, but not to the majority of users who benefit from the silent protection. The volume of automatically-blocked e-mails (48% of all messages in 2018, according to Symantec [124]) is likely to mean recipients will not want to be interrupted every time this happens, but systems will normally offer the option to periodically review such messages and provide feedback if algorithms are mis-classifying them.
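A sketch of such a quarantine-and-feedback loop follows, with an invented spam score and message format; the design choice worth noting is that automatically blocked messages are held rather than destroyed, so that user review can both release mistakes and feed corrections back into the next tuning cycle.

```python
quarantine = []       # automatically blocked messages, held rather than deleted
false_positives = []  # user feedback, used to retune the classifier

def filter_message(msg: dict) -> str:
    """Solely automated first decision: quarantine, don't destroy."""
    if msg["spam_score"] > 0.9:  # invented threshold
        quarantine.append(msg)
        return "quarantined"
    return "delivered"

def review(release_ids: set) -> None:
    """Periodic human review: releasing a message both delivers it and
    records a false positive for the next tuning cycle."""
    for msg in list(quarantine):
        if msg["id"] in release_ids:
            quarantine.remove(msg)
            false_positives.append(msg)

filter_message({"id": "m1", "subject": "invoice", "spam_score": 0.95})
review({"m1"})
print(len(false_positives))  # 1 -> evidence that the scoring needs adjusting
```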
These opportunities to review and tune algorithms based on user feedback again mean that security teams are likely to want to do more than the law requires. Recital 71 applies only to high-impact decisions and suggests only that “the controller should … ensure, in particular, that factors which result in inaccuracies in personal data are corrected and the risk of errors minimised”.
[125] However incident responders whose algorithms have failed to accurately detect a breach – even if using entirely accurate personal data and with minimal impact, this time – will have a strong incentive to improve them.
[126] It therefore appears that even automated incident prevention can be done in compliance with the GDPR. However there is sufficient concern about large-scale automated data processing that mere legal compliance may not be sufficient to ensure public confidence and support. The final section considers how breach detection, including automation, can achieve that.
7 Beyond Compliance: Avoiding “Creepiness”

Concerns about large-scale processing of personal data are widespread, crossing even the boundaries between traditionally different privacy cultures. In the USA, Leonard finds a “perception that business data analytics principally involved hidden and deliberatively secretive identification and targeting of individual consumers for ‘one to one’ marketing”: [127] in Europe the concern is “the personal dignity and integrity of individuals compromised by decisions made by automated processes, when contrasted with decisions made by humans having regard to individual circumstances and constrained by human rights laws and also, perhaps, human empathy?”.
[128] Leonard notes, however, the “highly contextual way in which ‘creepiness’ concerns arise”: [129] machines are not always bad, humans not always good. As Nissenbaum [130] would predict, in some contexts automation is perceived as a benefit, in others a threat. Doubts are widespread whether compliance – even with strict European privacy laws – will be sufficient to avoid these concerns. Rubinstein worries that the GDPR, while recognising “issues associated with targeting, profiling, and consumer mistrust, relies too heavily on the discredited informed choice model, and therefore fails to fully engage with the impending Big Data tsunami”.
[131] De Hert and Papakonstantinou note that drafting of this law started before big data began to “challenge the limits of legislation”.
[132] This final section therefore summarises, first, how breach detection contributes to, rather than conflicts with, expectations of online service use; then how, in each of the areas discussed, the purpose of breach detection is best served by doing more than the law requires.
7.1 Contributing to the Online Context

Nissenbaum suggests that, whether or not a use of data complies with the applicable law, individuals are likely to perceive it as breaching privacy if it conflicts with their expectations for the context in which it was provided.
[133] This goes beyond the Article 29 Working Party’s use of surprise as an indicator of incompatible processing, [134] , [135] since a fully disclosed secondary use may still conflict with contextual expectations. However, security – including breach detection and remediation – should be a basic expectation whenever we go online. Legislators and regulators are making this expectation more explicit; the media frequently remind us of the risks posed by services whose security measures are insufficient.
[136] The fact that personal data are processed for these purposes should neither surprise users, nor breach their contextual expectations.
In contrast to the systems that concern Leonard, breach detection is the opposite of a secret transfer of value from individual to provider. As discussed in section 2.1, its primary purpose, which requires it to be done openly, is to protect those whose data may be at risk. The benefits that accrue to service operators are a secondary result of achieving that primary purpose: sales are not lost, reputations are protected (or even enhanced), fines and compensation do not need to be paid. Here the interests of individuals and providers are strongly aligned, not conflicting. As discussed in section 4, individuals may be ‘targeted’, in the sense of receiving personalised attention, but this will only happen when they appear to have been victims of a security breach and need help. Not informing victims would defeat the purpose of the processing and leave both individual and provider exposed to continuing harm.
As discussed in section 6.2, only a few breach detection processes will involve fully-automated decision-making. In most cases, humans will check the results and recommendations of automated systems against their own experience and knowledge of context – precisely to ensure that a breach, rather than an unexpected but legitimate activity, is the most likely cause – before taking any action. Even where defensive actions are fully automated, a rapid human feedback process is essential to achieve the desired goal of permitting legitimate traffic while blocking hostile activity. Automation should, in fact, free up human resource to make these context-dependent decisions: in breach detection, machines and humans have highly complementary roles.
7.2 More than Compliance

Recital 49 of the GDPR sets a high standard for network and information security activities. Most of the information processed will be subject to data protection law; in addition to the normal requirements of necessity, proportionality, fairness, etc., processing based on legitimate interests must, unlike processing on any other legal basis, be explicitly tested against the risk to individuals’ rights and freedoms. For breach detection, these are not just legal requirements: they are essential to delivering the objective of improving the security of users, data systems and networks. Indeed, as this paper has shown, that objective often encourages security teams to implement more safeguards than the law requires. They should not be worried by – or seek to avoid – falling within regulation’s scope.
Section 2 discussed purpose limitation. Unlike many large-scale data processing activities, breach detection is focussed, from the start, on a single, well-defined purpose. Furthermore, that purpose now has strong support from regulators, and publicising it can provide direct benefits by discouraging casual attackers. Whereas the law establishes two types of secondary purpose – those that are compatible and those that are separately declared – breach detection will be most effective if done in accordance with both sets of obligations. Its activities should be closely linked to the continuing, secure, delivery of online services and they should always be designed to enhance, rather than put at risk, the rights and freedoms of individuals using those services.
Section 3 examined necessity: the requirement that processing be done in the least intrusive way that will achieve the objective. Although breach detection does require processing of large quantities of information about the use of networked services, this reflects the universal risk of becoming a victim of an on-line attack and the relevance of that information for detection and mitigation.
Section 4 examined technical and procedural safeguards. Breach detection can largely be done using pseudonyms, with individuals only being identified when there is a high likelihood that they have become a victim and need help. This should be done as soon as possible: keeping information long after a breach is discovered is directly contrary to the purpose of the processing.
Sections 5 and 6 considered the use of automation in breach detection: how this may affect the issues raised in purpose and necessity, and how it may be affected by the new Article 22 rules on profiling and automated decision-making. Section 5 identified multiple benefits of automation: allowing purpose and safeguards to be written into code, rather than just policy and practice; reducing the need for human inspection of legitimate activities; and allowing harmful ones to be identified and addressed more quickly. Section 6 concluded that even fully automated responses to attacks are unlikely to involve the “serious impactful effects” [137] to which Article 22 applies. Nonetheless, the requirements on processing that does exceed that threshold are beneficial to breach detection and response, so security teams using automation should follow them anyway: accuracy of algorithms is essential and data protection impact assessments are likely to be beneficial for large-scale deployments; informing those subject to decisions is a key part of helping them recover from being victimised. A notice that automated breach detection is being used to improve security should reassure legitimate users and may discourage attackers. Security teams will be delighted if the latter wish to exercise their right to object to this processing!

Finally, this section has shown that there is, in fact, no paradox. Processing personal data and protecting it from security breaches should be inextricably linked in expectation, law and practice. Where protecting personal data requires further processing, not only can this be done in accordance with the law, the protection is likely to be ineffective if it is not. In many cases, effective breach detection requires security teams to take even more care than the law requires. Well-conducted breach detection activities should be a source of reassurance to data subjects, data controllers and regulators alike.
[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (hereinafter ‘GDPR’).
[2] Council of Europe, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, Strasbourg, 28 Jan 1981.
[3] Federal Trade Commission, Fair Information Practice Principles (25 June 2007), available at https://web.archive.org/web/20090331134113/http://www.ftc.gov/reports/privacy3/fairinfo.shtm (accessed 19 August 2019).
[4] European Commission, “State of the Union 2017 – Cybersecurity: Commission scales up EU’s response to cyber-attacks” (Brussels, 19 September 2017), available at http://europa.eu/rapid/press-release_IP-17-3193_en.htm (accessed 19 August 2019).
[5] Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), Recital 25.
[6] GDPR, supra n. 1, Recital 85.
[7] European Data Protection Board, “Personal Data Breach Notifications” (25 May 2018), available at https://edpb.europa.eu/node/67 (accessed 19 August 2019).
[8] Article 29 Working Party, “Guidelines on Personal data breach notification under Regulation 2016/679” (18/EN WP250rev.01) (hereinafter “Breach Notification”), p. 13.
[9] Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union (NIS Directive).
[10] Regulation (EU) 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (EIDAS Regulation).
[11] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications), as amended by Article 2(4)(c) of Directive 2009/136/EC of the European Parliament and of the Council of 25 November 2009 (ePrivacy Directive).
[12] Article 29 Working Party, “Breach Notification”, supra n. 8, p. 13.
[13] Patrick Breyer v Bundesrepublik Deutschland , Case C-582/14 [2016] ECLI:EU:C:2016:779 (hereinafter Breyer ), para. 64.
[14] ENISA, “Proactive detection of network security incidents” (2012), available at https://www.enisa.europa.eu/publications/proactive-detection-report/ (accessed 19 August 2019), p. 105.
[15] Forum of Incident Response and Security Teams, “Establishing a CSIRT” (version 1.2, November 2017), available at https://www.first.org/resources/guides/Establishing-CSIRT-v1.2.pdf (accessed 19 August 2019), p. 28.
[16] Moira West-Brown et al., “Handbook for Computer Security Incident Response Teams” (Software Engineering Institute, April 2003), available at https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=6305 (accessed 19 August 2019), p. 84.
[17] Article 29 Working Party, “Opinion 4/2007 on the concept of personal data” 01248/07/EN WP 136 (hereinafter ‘personal data’), pp.15-16.
[18] Breyer , supra n. 13, para 64.
[19] GDPR, supra n. 1, Article 5(1)(b).
[20] GDPR, supra n. 1, Article 5(1)(c).
[21] Article 29 Working Party, “Opinion 03/2013 on purpose limitation” 00569/13/EN WP 203 (hereinafter “Purpose Limitation”), p.4.
[22] Ibid., p. 11.
[23] GDPR, supra n. 1, Recital 49.
[24] Article 29 Working Party, “Purpose Limitation”, supra n. 21, p. 16.
[25] Ibid., p. 3.
[26] GDPR, supra n. 1, Recital 49.
[27] Breyer , supra n. 13, para. 64.
[28] Article 29 Working Party, “Breach Notification”, supra n. 8, p. 6.
[29] Article 29 Working Party, “Purpose Limitation”, supra n. 21, p. 25.
[30] For example BBC, “British Airways faces record £183m fine for data breach” (8 July 2019), available at https://www.bbc.co.uk/news/business-48905907 (accessed 19 August 2019); BBC “UK watchdog plans to fine Marriott £99m” (9 July 2019), available at https://www.bbc.co.uk/news/technology-48928163 (accessed 19 August 2019).
[31] Article 29 Working Party, “Purpose Limitation”, supra n. 21, p. 13.
[32] Ibid., p. 26.
[33] Andrew Cormack, “Incident Response: Protecting Individual Rights Under the General Data Protection Regulation”, (2016) 13(3) SCRIPTed 258-282, p. 276.
[34] Bernie Lantz, Rob Hall, and Jason Couraud, “Locking Down Log Files: Enhancing Network Security by Protecting Log Files”, (2006) VII(2) Issues in Information Systems 43-47, p. 44.
[35] Article 29 Working Party, “Purpose Limitation”, supra n. 21, p. 53.
[36] Ibid., pp. 69-70.
[37] Ibid., p. 21.
[38] Ibid., p. 12.
[39] Information Commissioner’s Office, “Visitors to Our Website” (ICO, undated), available at https://ico.org.uk/global/privacy-notice/visitors-to-our-website/#sec (accessed 23 August 2019).
[40] Cormack, supra n. 33, p. 274.
[41] Paolo Balboni et al., “Legitimate Interest of the Data Controller. New Data Protection Paradigm: Legitimacy Grounded on Appropriate Protection” (2013) 3(4) International Data Privacy Law 244-261, p. 247.
[42] Article 29 Working Party, “Purpose Limitation”, supra n. 21, p. 24.
[43] Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679” 17/EN WP260 rev.01 (hereinafter “Transparency”), p. 24.
[44] GDPR, supra n. 1, Recital 39.
[45] Article 29 Working Party, “Opinion 01/2014 on the application of necessity and proportionality concepts and data protection within the law enforcement sector” 536/14/EN WP 211 (hereinafter “Necessity and Proportionality”), p. 18.
[46] Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” 17/EN WP251rev.01 (hereinafter “Profiling Guidelines”), p. 17.
[47] GDPR, supra n. 1, Recital 49.
[48] ePrivacy Directive, supra n. 11, Article 5(3).
[49] Charter of Fundamental Rights of the European Union 2012/C 326/02, Articles 7 and 8.
[50] Article 29 Working Party, “Necessity and Proportionality”, supra n. 45, p. 6.
[51] Article 29 Working Party, “Opinion 04/2012 on Cookie Consent Exemption” 00879/12/EN WP 194, p. 4.
[52] Article 29 Working Party, “Necessity and Proportionality”, supra n. 45, p. 21.
[53] Guofei Gu et al., “BotHunter: Detecting Malware Infection Through IDS-Driven Dialog Correlation” (2007) 16th USENIX Security Symposium 167-182.
[54] SANS, “Survival Time” (Internet Storm Centre, undated), available at https://isc.sans.edu/survivaltime.html (accessed 19 August 2019) suggests every Internet-connected computer receives hostile traffic several times a day.
[55] Digital Rights Ireland Ltd v Minister for Communications, Marine and Natural Resources, Minister for Justice, Equality and Law Reform, Commissioner of the Garda Síochána, Ireland, The Attorney General , Case C-293/12 [2014] ECLI:EU:C:2014:238 (hereinafter Digital Rights Ireland ), para. 58.
[56] Lorna Woods, “Automated Number Plate Recognition: Data Retention and the Protection of Privacy in Public Places” (2017) 2(1) Journal of Information Rights, Policy and Practice 1-21, p. 18.
[57] Ibid., p. 12.
[58] Eurobarometer, “Special Report 464a: Europeans’ Attitudes Toward Cyber Security” (European Commission, 2017), p. 66, available at http://ec.europa.eu/commfrontoffice/publicopinion/index.cfm/ResultDoc/download/DocumentKy/79734 (accessed 23 August 2019).
[59] Digital Rights Ireland , supra n. 55, para. 44.
[60] Ibid., para. 41.
[61] Tele2 Sverige AB v Post-och telestyrelsen and Secretary of State for the Home Department v Tom Watson, Peter Brice, Geoffrey Lewis , Cases C-203/15 and C-698/15 [2016] ECLI:EU:C:2016:970 (hereinafter Tele2), para. 88.
[62] Directive 2009/136/EC of the European Parliament and of the Council of 25 November 2009 amending Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services, Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector and Regulation (EC) No 2006/2004 on cooperation between national authorities responsible for the enforcement of consumer protection laws, Recital 53.
[63] Woods, supra n. 56, p. 18.
[64] GDPR, supra n. 1, Article 4(5).
[65] Breyer , supra n. 13, paras 47-8.
[66] GDPR, supra n. 1, Recital 28.
[67] GDPR, supra n. 1, Article 25(1).
[68] GDPR, supra n. 1, Article 32(1)(a).
[69] Luca Bolognini and Camilla Bistolfi, “Pseudonymisation and Impacts of Big (personal/anonymous) Data Processing in the Transition from the Directive 95/46/EC to the new EU General Data Protection Regulation” (2017) 33(2) Computer Law and Security Review 171-181, p. 178.
[70] GDPR, supra n. 1, Article 11.
[71] Alessandro Spina, “Risk Regulation of Big Data: Has the Time Arrived for a Paradigm Shift in EU Data Protection Law?” (2014) 5(2) European Journal of Risk Regulation 248-252, p. 251.
[72] Alessandro Mantelero, “Data Protection, e-ticketing, and Intelligent Systems for Public Transport” (2015) 5(4) International Data Privacy Law 309-320, p. 312.
[73] Cormack, supra n. 33, p. 281.
[74] Bolognini and Bistolfi, supra n. 69, p. 180.
[75] West-Brown et al., supra n. 16.
[76] Peter Haag, “Watch Your Flows with NfSen and NFDUMP” (2005), available at https://meetings.ripe.net/ripe-50/presentations/ripe50-plenary-tue-nfsen-nfdump.pdf (accessed 23 August 2019).
[77] Jingwei Huang, Zbigniew Kalbarczyk, and David M Nicol, “Knowledge Discovery from Big Data for Intrusion Detection Using LDA” (2014) IEEE International Congress on Big Data 760-761, p. 760.
[78] Arthur Clune, University of York, personal communication. 17 th August 2018.
[79] Article 29 Working Party, “Breach Notification”, supra n. 8, p. 13.
[80] Huang et al, supra n. 77, p. 760.
[81] Tyler Wall, “SIEM Implementation Strategies” (Tripwire, March 13 2018), available at https://www.tripwire.com/state-of-security/incident-detection/log-management-siem/siem-implementation-strategies/ (accessed 23 August 2019).
[82] GDPR, supra n. 1, Recital 49.
[83] Richard Zeuch, Taghi Khoshgoftaar, and Randall Wald, “Intrusion Detection and Big Heterogenous Data: A Survey” (2015) 2:3 Journal of Big Data 1-41, p. 34.
[84] Blake Anderson, Subharthi Paul, and David McGrew, “Deciphering Malware’s use of TLS (without Decryption)” (2016) 14(3) Journal of Computer Virology and Hacking Techniques 195-211.
[85] GDPR, supra n. 1, Article 25.
[86] E.g. F5 Networks, “The F5 DDoS Protection Reference Architecture” (19 December 2014), available at https://f5.com/resources/white-papers/the-f5-ddos-protection-reference-architecture (accessed 23 August 2019).
[87] Article 29 Working Party, “Working Document: Privacy on the Internet – An integrated EU Approach to On-line Data Protection” 5063/00/EN/FINAL WP37, p. 40.
[88] GDPR, supra n. 1, Recital 85.
[89] Zeuch et al, supra n. 83, p. 2.
[90] Christopher Kuner et al, “The Challenge of Big Data for Data Protection” (2012) 2(2) International Data Privacy Law 47-49, p. 47.
[91] Article 29 Working Party, “Opinion 2/2006 on privacy issues related to the provision of email screening services” 00451/06/EN WP 118 (hereinafter “Email Screening Services”), p.40.
[92] Article 29 Working Party, “Profiling Guidelines”, supra n. 46, p. 17.
[93] GDPR, supra n. 1, Article 4(4).
[94] Paul De Hert and Vagelis Papakostantinou, “The New General Data Protection Regulation: Still a Sound System for the Protection of Individuals?” (2016) 32 Computer Law & Security Review 179-194, p. 189.
[95] Ira Rubinstein, “Big Data: The end of privacy or a new beginning?” (2013) 3(2) International Data Privacy Law 74-87, pp. 77-78.
[96] GDPR, supra n. 1, Recital 71.
[97] Article 29 Working Party, “Advice paper on essential elements of a definition and a provision on profiling within the EU General Data Protection Regulation” (13 May 2013) (hereinafter “Profiling Advice”), p. 4.
[98] GDPR, supra n. 1, Article 35(3)(a).
[99] Alex Hern, “iOS7 update doubles UK and German net traffic and may have reached 100m” (The Guardian, 19 September 2013), available at https://www.theguardian.com/technology/2013/sep/19/ios-7-update-traffic-100-million (accessed 23 August 2019).
[100] Article 29 Working Party, “Profiling Guidelines”, supra n. 46, p. 21.
[101] Article 29 Working Party, “Email Screening Services”, supra n. 91, p. 2.
[102] Jeroen Pijpker and Harald Vranken, “The Role of Internet Service Providers in Botnet Mitigation” (2016) Proceedings of the 23 rd European Intelligence and Security Informatics Conference 24-31, p. 26.
[103] Christopher Kuner et al., “Machine Learning with Personal Data: is Data Protection Law Smart Enough to Meet the Challenge?” (2017) 7(1) International Data Privacy Law 1-2, p. 1.
[104] Lee Bygrave, “Minding the Machine: Article 15 of the EC Data Protection Directive and Automated Profiling” (2001) 17(1) Computer Law and Security Report 17-24, p. 19.
[105] Ibid.
, p. 20.
[106] Ibid.
, p. 17.
[107] Ibid.
[108] Ibid.
, p. 18.
[109] Ibid.
[110] Rubinstein, supra n. 95, p. 79.
[111] Article 29 Working Party, “Profiling Advice”, supra n. 97, p. 3.
[112] Article 29 Working Party, “Profiling Guidelines”, supra n. 46, p. 9.
[113] Bygrave, supra n. 104, p. 20.
[114] Ibid.
, p. 19.
[115] Article 29 Working Party, “Profiling Advice”, supra n. 97, p. 4.
[116] Article 29 Working Party, “Profiling Guidelines”, supra n. 46, p. 21.
[117] Ibid.
[118] Article 29 Working Party, “Email Screening Services”, supra n. 91, p. 6.
[119] Nicky Woolf, “DDoS attack that disrupted internet was largest of its kind in history, experts say” (The Guardian, 26 October 2016), available at https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn-mirai-botnet (accessed 19 August 2019).
[120] Alex Hern and Samuel Gibbs, “What is WannaCry ransomware and why is it attacking global computers?” (The Guardian, 12 May 2017), available at https://www.theguardian.com/technology/2017/may/12/nhs-ransomware-cyber-attack-what-is-wanacrypt0r-20 (accessed 19 August 2019).
[121] Article 29 Working Party, “Email Screening Services”, supra n. 91, p. 40.
[122] Article 29 Working Party, “Profiling Guidelines”, supra n. 46, p. 25.
[123] Ibid.
[124] Symantec, “Internet Security Threat Report Vol.24” (Symantec, 2019), p. 1, available at https://www.symantec.com/security-center/threat-report (accessed 12 August 2019).
[125] GDPR, supra n. 1, Recital 71.
[126] Wall, supra n. 81.
[127] Peter Leonard, “Customer Data Analytics: Privacy Settings for ‘Big Data’ Businesses” (2014) 4(1) International Data Privacy Law 53-68, p. 54.
[128] Ibid.
, p. 55.
[129] Ibid.
, p. 54.
[130] Helen Nissenbaum, Privacy in Context (Stanford: Stanford University Press, 2010), p. 195.
[131] Rubinstein, supra n. 95, p. 74.
[132] De Hert and Papakonstantinou, supra n. 94, p. 180.
[133] Nissenbaum, supra n. 130, p. 140.
[134] Article 29 Working Party, “Purpose Limitation”, supra n. 21, p. 24.
[135] Article 29 Working Party, “Transparency”, supra n. 43, p. 24.
[136] Rory Cellan-Jones, “Dixons Carphone Admits Huge Databreach” (BBC News, 13 June 2018) available at https://www.bbc.co.uk/news/business-44465331 (accessed 19 August 2019).
[137] Article 29 Working Party, “Profiling Guidelines”, supra n. 46, p. 21.
" |
182 | 2,020 | "Between a rock and a hard place: owners of smart speakers and joint control – SCRIPTed" | "https://script-ed.org/article/between-a-rock-and-a-hard-place-owners-of-smart-speakers-and-joint-control" | "SCRIPTed: A Journal of Law, Technology & Society, Volume 17, Issue 2, August 2020. Between a rock and a hard place: owners of smart speakers and joint control. Silvia De Conca* © 2020 Silvia De Conca. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Abstract: The paper analyses to what extent the owners of smart speakers, such as Amazon Echo and Google Home, can be considered joint controllers, and what the implications of the household exemption under the GDPR are, with regard to the personal data of guests or other individuals temporarily present in their houses. Based on the relevant interpretations of the elements constituting control and joint control, as given by the Art. 29 Working Party and by the European Court of Justice (in particular in the landmark cases Wirtschaftsakademie, Jehovah’s Witness, Ryneš, and Fashion ID), this paper shows how the definition of joint control could potentially be stretched to the point of including the owners of smart speakers. The purpose of the paper is, however, to show that the preferred interpretation should be the one exempting owners of smart speakers from becoming liable under the GDPR (with certain exceptions), in the light of the asymmetry of positions between individuals and companies such as Google or Amazon and of the rationales and purposes of the GDPR. In doing so, this paper unveils a difficult balancing exercise between the rights of one individual (the data subject) and those of another individual (the owner of a smart speaker used for private and household purposes only).
Keywords Joint controllers; smart speakers; data protection; vocal assistants; Google; Amazon Cite as: Silvia De Conca, "Between a rock and a hard place: owners of smart speakers and joint control" (2020) 17:2 SCRIPTed 238 https://script-ed.org/?p=3884 DOI: 10.2966/scrip.170220.238 * PhD researcher, Tilburg Institute for Law, Technology, Markets, and Society (TILT-LTMS), University of Tilburg, Tilburg, The Netherlands, s.deconca@tilburguniversity.edu 1 Introduction The European regime for personal data protection relies, among other things, on assigning rights and duties to three main actors: i) the data subject, that is the (identified or identifiable) natural person whose personal data are being collected and processed; ii) the controller, the natural or legal person (including public entities and authorities) which determines the whys and hows of the processing of said personal data; and finally iii) the processor, the natural or legal person (including public entities or authorities) carrying out the processing on behalf of the controller.
[1] The above-mentioned distinction was originally made on the assumption that processing was organized almost like a chain of assemblage or an industrial process. In practice, however, there have been many cases in which the three roles have partially overlapped, with data subjects or processors also becoming controllers. Factual circumstances have given life to the idea of pluralistic control, later synthesised in the GDPR as joint controllership, as will be explained in more detail in part 3 below. Under pluralistic or joint controllership, multiple parties can be considered controllers and are, as such, subjected to the set of duties established by the GDPR.
This paper analyses whether, and how, the owners of smart speakers powered by intelligent vocal assistants, such as Amazon Echo (powered by Alexa) and Google Home (powered by Google Assistant), can be considered joint controllers under the GDPR with regard to the personal data of guests or other individuals temporarily present in their houses.
After a brief overview of how smart speakers and intelligent assistants work, part 3 will explain what the current definitions of controller, separate and joint controllership are, based on the GDPR but also on the relevant case law and guidelines issued under the previous regulatory regime, when still applicable. Part 4 will then explain how the current landscape shapes the roles that an owner of a smart speaker can assume under the GDPR: Data Subject and de facto separate controller. The role of the household exemption in relation to owners of smart speakers will also be discussed in part 4. Finally, in the conclusions, I will explain why the preferred interpretation should be the one exempting owners of smart speakers from becoming liable under the GDPR (with certain exceptions), also in the light of the reality of the position of individuals vis-à-vis big companies and of the purposes and intentions of the European legislators.
2 Technological background: What are smart speakers and what do they do? Devices like Google Home and Amazon Echo are often referred to as smart speakers. The result of crossbreeding Internet of Things (IoT), Artificial Intelligence (AI), Networked Robotics, Domotics, and Ambient Intelligence, these small items of furniture have significantly gained popularity in the last two years in both the United States and Europe.
[2] Smart speakers can be connected to a plethora of Internet-connected devices, which can be controlled through them: from smart TVs and fridges to smart locks, thermostats, switches and lightbulbs, and even to smart mattresses, coffee machines, adult toys, toothbrushes, closet organizers, dishwashers, and security cameras.
[3] At the centre of this network of inter-connected devices stands the intelligent vocal assistant ‘contained’ by the smart speaker. The assistant is both the voice with which users interface and the software that carries out the tasks requested by the owner or another user; it is, therefore, the very core of the smart speaker.
From now on, the general term smart speaker(s) will be used, with the caveat that, for the purposes of this paper, the term refers to the combination of the physical embodiment (the speaker) and the software (the vocal assistant). With regard to the individuals interacting with smart speakers, the term owner will be used, even though the arguments developed in this paper with regard to the qualification as controller can, in most cases, also be replicated for users who do not legally own the smart speaker.
Smart speakers collect data from their own sensors as well as from the sensors present on the connected devices, and process them in the cloud. In particular, with a procedure that appears similar for both Amazon and Google, the very first activation of the smart speaker coincides with a request to download the related app on the user’s phone. Via the app, the user is asked to consent to both the Terms and Conditions and the Privacy Policy relating to the vocal assistant, and to sync it with pre-existing accounts (or create a new account if necessary). From that moment on, any user can wake up the assistant using a trigger word, followed by the request for a task or by a command. Starting from a fraction of a second before the trigger word, until the completion of the requested task, the smart speaker streams and records everything in the cloud, where it processes the data and keeps logs of the recorded requests. The logs can be accessed via the app and deleted or, in the case of Alexa, a vocal instruction can be given to the assistant to delete them.
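To make the activation mechanism concrete, the following is a minimal, self-contained sketch (in Python, used here purely for illustration). Everything in it — the frame representation, the "<task_done>" marker, the toy detector — is a hypothetical stand-in for proprietary on-device components, not Amazon’s or Google’s actual code; the point is only to show why audio from a fraction of a second before the trigger word, including guests’ background chatter, ends up being streamed.

```python
from collections import deque

WAKE_WORD = "hey_assistant"  # hypothetical trigger word
PRE_ROLL = 3                 # frames kept from just before the trigger

def detect_wake_word(frame: str) -> bool:
    # Toy stand-in: real devices run a proprietary on-device
    # keyword-spotting model instead of a substring test.
    return WAKE_WORD in frame

def run_session(frames):
    """Scan audio locally; 'stream' to the cloud only once the wake word
    is heard, starting a few frames before it (the pre-roll buffer)."""
    pre_roll = deque(maxlen=PRE_ROLL)  # rolling buffer, discarded if unused
    streamed = []                      # what actually leaves the device
    listening = False
    for frame in frames:
        if not listening:
            pre_roll.append(frame)     # frames kept just before the trigger
            if detect_wake_word(frame):
                listening = True
                streamed.extend(pre_roll)  # the pre-roll is streamed too
        else:
            streamed.append(frame)     # everything until the task completes
            if frame == "<task_done>":
                listening = False      # request is logged; resume scanning
    return streamed

audio = ["guest chatter", "hey_assistant play a song",
         "guest talking in the background", "<task_done>"]
print(run_session(audio))  # all four frames, guest chatter included
```

Frames that never precede a trigger word stay in the rolling buffer and are discarded, mirroring the constant local scanning described above; frames that do precede or follow one are sent to the cloud in full.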
[4] The information collected via the smart speaker and the connected devices is processed together with information coming from other sources (such as the purchase and Internet surfing history connected to the users’ Amazon, Google, or even other accounts) and with other databases. According to the Privacy Policies of the devices, the additional information deriving from said processing is then used for: fulfilling the commands, personalising the experience, improving the natural language processing capabilities of the assistant, advertising, marketing, and other business-related purposes.
Two additional elements should also be taken into consideration. One, particularly relevant for the purposes of this paper, concerns voice-matching. Amazon allows users to create a general profile that associates multiple devices with the same house (a so-called household profile). Within the household profile, individual profiles for each adult living in the house can be created. Amazon expressly states that voice profiles cannot be created for children. This, however, does not mean that children cannot use the smart speaker: they can make requests to the device, which will comply, but no personal profile can be created for them; their requests will therefore be recorded but not associated with an identity.
[5] Each profile can then be connected to a voice that Alexa learns via a specific function. Similarly, Google Assistant has a voice-matching function that allows users to be registered with a certain device and matched with their voices. In this way, the assistant recognises from the voice who is interacting with it, and only provides the content relating to his or her profile. This is particularly relevant for functions such as emails, messages, reminders, and alarms, but also with regard to preferences concerning music or news. Google Home even has a guest mode that can be enabled by one of the registered users, thanks to which guests can cast content through Google Home (via a Chromecast). Children can be associated with a voice profile, but only with the authorisation of an adult, and their profile is then subject to limitations, such as the impossibility to play YouTube videos or make online purchases.
[6] Google Home also keeps the voices it cannot match with a profile in a separate log, which can be accessed and, if desired, deleted by the owner or one of the registered profiles.
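The voice-matching and logging behaviour just described can be sketched as a simple data structure. This is a hedged illustration only: the voiceprints are plain strings standing in for proprietary speaker recognition, and the profile fields and thresholds are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Household:
    """Toy model of a household profile with voice matching."""
    profiles: dict = field(default_factory=dict)     # voiceprint -> profile
    unmatched_log: list = field(default_factory=list)

    def enroll(self, voiceprint: str, name: str, child: bool = False):
        # Child profiles carry restrictions (e.g. no online purchases).
        self.profiles[voiceprint] = {"name": name, "child": child}

    def handle(self, voiceprint: str, request: str) -> str:
        profile = self.profiles.get(voiceprint)
        if profile is None:
            # Unrecognised voices (e.g. guests) are still recorded, but
            # kept in a separate log not tied to any registered identity.
            self.unmatched_log.append(request)
            return "served without personalisation"
        if profile["child"] and request == "buy":
            return "blocked for child profile"
        return f"personalised response for {profile['name']}"

home = Household()
home.enroll("voice-A", "Alice")
home.enroll("voice-B", "Ben", child=True)
print(home.handle("voice-A", "read my email"))  # personalised response
print(home.handle("voice-B", "buy"))            # blocked for child profile
print(home.handle("voice-X", "play jazz"))      # guest: unmatched log
print(home.unmatched_log)                       # ['play jazz']
```

The separate unmatched-voices log is precisely the feature that makes guests’ data visible (and deletable) to the owner, a point that becomes relevant for the controllership analysis in part 4.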
[7] Both devices also feature a mute button, which the owners can push to prevent the device from listening at all (including searching for, and responding to, the wake word).
Finally, it should be pointed out that the assistants support applications that represent new capabilities and uses (in the case of Alexa the applications are not called apps, but skills). Skills and apps are available on the usual app stores, and they are often developed by third parties based on an application programming interface (API) made available by Amazon and Google.
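How such a third-party skill or app plugs into the assistant can be illustrated with a minimal handler. The request shape below is invented for the purposes of the example; the real Alexa Skills Kit and Google Assistant APIs define their own (different) schemas, but the division of labour is similar: the platform turns speech into a structured intent, and the third-party handler returns the text to be spoken.

```python
# Hedged sketch of a third-party 'skill' handler; the request/response
# shapes are hypothetical, not the actual Amazon or Google schemas.
def weather_skill_handler(request: dict) -> dict:
    intent = request["intent"]            # e.g. "GetForecast"
    slots = request.get("slots", {})      # parameters parsed from speech
    if intent == "GetForecast":
        city = slots.get("city", "your area")
        return {"speech": f"Here is the forecast for {city}..."}
    return {"speech": "Sorry, I cannot help with that."}

# The platform would invoke the handler roughly like this:
incoming = {"intent": "GetForecast", "slots": {"city": "Tilburg"}}
print(weather_skill_handler(incoming)["speech"])
```

Note that under this architecture the skill developer typically receives the structured intent rather than the raw audio, which is part of why the producers of the assistants remain central to the processing.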
[8] Having established who, or rather what, Alexa and Assistant are, part 3 below will provide a summary of the way in which controllership is defined in the context of the GDPR (and the previous Data Protection Directive [9] ), focusing in particular on the interpretation of the notion of joint controllership and of de facto pluralistic control (also referred to as separate control).
3 Data controller, separate and joint controllership The definition of controller established by the GDPR was already introduced in almost exactly the same terms by Directive 95/46/EC, which followed in the footsteps of the Council of Europe’s Convention 108.
[10] According to art. 4(7) of the GDPR, the controller is: “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data”. Identifying the controller (as well as the processor) is particularly important in the system of the GDPR because it is based on this qualification that a set of obligations is assigned.
[11] Establishing who is the controller of the processing of personal data is, therefore, necessary in order to allocate responsibility.
[12] The Article 29 Working Party, in its Opinion 1/2010 on the Concepts of “Controller” and “Processor”, highlights the three main elements composing the definition of controller: (i) which subjects can be controllers (natural or legal persons, as well as an array of public entities), (ii) the potentially plural nature of control, and (iii) the qualifying circumstances (the determination of the purposes and means of the processing). Since the first one is not relevant for this paper, only the second and third element will be briefly analysed below.
Whether a subject is a controller is established based on a factual evaluation, meaning that any formal appointment of a subject as a controller does not matter if, de facto , that subject “ actually is not in the position to ‘determine’” [13] (emphasis added) the purposes and means of the processing. The use of the term ‘actually’ implies that factual circumstances need to be taken into consideration. Based on the factual circumstances, more than one subject can also be deemed controller, as expressly stated by article 4(7). This circumstance is generally indicated with the term ‘joint control’. Under the regime of the Data Protection Directive, some doubts had arisen concerning the concrete application of joint control. Due to the complexity of the processing of data, in fact, different possible forms of joint control had been identified. Joint control could be exercised, for instance, on the entire processing or on one of its stages only, having therefore different controllers for each stage. From the perspective of the purposes, each controller could have different purposes for the same processing of the same data, or, alternatively, all the controllers could share the same purposes. Overall, the reality of how the processing takes place can give life to a looser or closer relationship among controllers.
[14] Consequently, controllers are not necessarily responsible for all the obligations relating to data protection. Under the Data Protection Directive, this created some confusion as to how responsibilities were to be divided among controllers.
[15] The Art.29 Working Party, in the abovementioned Opinion 1/2010, highlighted how this could “lead to undesired complexities and to a possible lack of clarity in the allocation of responsibilities. This would risk making the entire processing unlawful due to a lack of transparency and violate the principle of fair processing.” [16] To add to the confusion, under the Data Protection Directive it was not fully clear whether joint controllers were subject to joint and several liability.
[17] During the roughly twenty years in which the Data Protection Directive was in effect, the European Court of Justice (ECJ) was often asked to clarify the issues connected to joint control. Two particularly significant cases are also very recent. These cases were decided after the GDPR entered into force but refer to the Data Protection Directive, as they started before the 25th of May 2018: the Wirtschaftsakademie and the Jehovah’s Witness cases.
[18] In the first case, the ECJ affirmed that the administrators of a Facebook page are joint controllers together with Facebook for the processing of the data of the followers of their page.
[19] The decision caught the attention of both experts and the general public, due to the very expanded notion of controller it entails. The Court points out that it goes without saying that the administrators of a Facebook page do not have any negotiating power concerning Facebook’s terms and conditions, and that Facebook is the main party responsible for most of the processing. Nevertheless, the administrators select the criteria based on which Facebook will direct a certain target audience to them. Furthermore, by accepting Facebook’s offer to provide them with statistical (and as such anonymous) data on the users visiting their page, they indirectly trigger the installation of cookies on the computers of those visiting the page. Administrators of a Facebook page establish a purpose (statistical data) and, with the very creation of the Facebook page, the means too. The request for statistical information acts as a trigger for the use of the cookie (of which users were, furthermore, not informed). The interpretation chosen by the Court in this case expands the definition of controller. As a starting point, the Court affirms that page administrators are “enablers” (or, in the words of the Art. 29 Working Party, “facilitators”) [20] of the processing. Building on that, the Court affirms that they are also (partially) beneficiaries of the processing, specifying that they can be controllers even if they do not have access to the personal data [21] and do not have any power vis-à-vis the primary controller.
In the Jehovah’s Witness case, the Court established that a religious institution was a joint controller, together with its members, of the data processing occurring during door-to-door preaching activities. Said activities were carried out by the members, but coordinated, organized and encouraged by the institution. In the decision, the Court stressed how the idea of a plurality of controllers is a given of the data protection regime, and serves the purpose of ensuring adequate protection of Data Subjects. Furthermore, the Court also confirmed the position of the Art. 29 Working Party in affirming that the existence of joint responsibility does not necessarily imply equal responsibility of the various operators engaged in the processing of personal data. On the contrary, those operators may be involved at different stages of that processing of personal data and to different degrees, so that the level of responsibility of each of them must be assessed with regard to all the relevant circumstances of the particular case.
[22] For both the Court and the Art. 29 Working Party, therefore, joint control can assume different forms and relate to looser or tighter relationships among the controllers.
Following the direction pointed out by both the Art. 29 Working Party and the ECJ, the European legislator has addressed the main issues concerning joint control in the GDPR. Art. 26 of the GDPR, in fact, establishes that whenever two or more parties are involved in the determination of the purposes and means of processing, and are therefore joint controllers, they shall: “determine their respective responsibilities for compliance with the obligations under this Regulation”. The allocation of responsibilities shall occur by means of an agreement, which shall: i) be a truthful representation of the factual control and consequent responsibilities of each controller, ii) be made available to Data Subjects, and iii) indicate the contact point for the Data Subject. Finally, art. 26 clarifies the nature of the relationship among joint controllers, by establishing in its third paragraph that, no matter the terms of the agreement, “the data subject may exercise his or her rights under this Regulation in respect of and against each of the controllers”. The wording of the article confirms that joint controllers are bound by joint and several liability. This implies that the individual controller that has been addressed by the Data Subject can obtain redress from the other controllers for their part of the responsibility.
With regard to joint control, van Alsenoy proposes distinguishing the situation in which multiple controllers process data independently of each other, each for their own purpose. He names this circumstance “separate controllers”, and explains that the independence of the processing can persist even if the separate controllers transfer the data from one to the other.
[23] If, on the other hand, multiple controllers “jointly exercise decision-making power concerning the purposes and means of the processing”, then the terms “joint controllers” or “co-controllers” would apply.
[24] According to van Alsenoy, the line between separate and joint control can be blurred by business practices and technology-related circumstances. However, if the controllers pursue different objectives via different means, it is reasonable to consider them separate controllers. This interpretation, while it has not necessarily found official confirmation by the Courts or administrative authorities, can be seen as in line with the wording of art. 26 of the GDPR, which expressly affirms that the multiple actors involved in the processing shall enter into an agreement to determine their share of responsibility. That case would easily be identified as joint control, as stated by the very title of art. 26. It is not clear, however, how art. 26 should apply to the case of de facto multiple control not regulated by an agreement, which would be the case of separate controllers. It is reasonable, based on the previous interpretations given to the concept of joint control, that the mere opening sentence of art. 26 (“Where two or more controllers jointly determine the purposes and means of processing, they shall be joint controllers.”) is enough to establish joint and several liability among them too.
The third element constituting the definition of controller is the determination of the means and purposes of the processing. This element represents the very core of the definition of controller, and has to be interpreted in a factual way, as mentioned above. Means and purposes shall be understood, in this context, as the “how” (for instance, technical and organizational elements) [25] and the “why” of the processing. The interpretation of the word “determine” has, however, raised several doubts in the past, as has briefly been mentioned with regard to the Wirtschaftsakademie case. According to the Art. 29 Working Party, the evaluation revolves around what “level of influence” [26] is necessary to qualify an entity as a controller. It is, at this point, worth noting that, according to the Working Party: “while determining the purpose of the processing would in any case trigger the qualification as controller, determining the means would imply control only when the determination concerns the essential elements of the means” (emphasis added).
[27] Essential elements would be, by way of non-exhaustive example, the kind of data to be processed and the duration of the processing.
[28] Once the controller(s) have been identified, the GDPR applies unless the processing falls within the so-called household exemption. The household exemption, already existing under the Data Protection Directive regime, is maintained by the GDPR. The household exemption establishes that the GDPR does not apply to the processing of personal data carried out “by a natural person in the course of a purely personal or household activity”.
[29] The phrase “purely personal or household activities” is used to identify all those activities falling within the management of a house, of a family, or of personal life.
[30] Activities falling within the professional, working, or charitable field do not qualify for the household exemption, regardless of whether they take place in the house or not.
[31] A recent decision concerning the boundaries of the household exemption that is particularly relevant in this regard is Ryneš.
[32] In this case, in fact, the Court established that an individual recording images with a security camera is indeed a controller. For the Court, the circumstance that the security camera was pointing at the entrance of his/her house, and therefore at a public space, rather than only at the inside of the house, excludes the application of the household exemption. It is evident that the location does play an important role in evaluating whether the processing falls within the exemption, but that the very nature of the activities themselves must nevertheless be personal or familial.
[33] Recital 18 of the GDPR, following the suggestions elaborated by the ECJ and the Art. 29 Working Party during the previous decade, expands the scope of the household exemption to include not only traditionally private activities such as correspondence, keeping a diary, and the holding of addresses, but also “social networking and online activity undertaken within the context of such activities.” [34] While the recital does not elaborate more in detail on the topic, it is reasonable to affirm that, based also on the position of the Art. 29 Working Party, not all online and social network activities qualify for the household exemption. According to Opinion 5/2009 on social networking activities, in fact, three circumstances shall be analyzed in order to assess whether activities on a Social Network are ’purely personal or household’. From the point of view of the purpose of the use of a Social Network, acting on behalf of a company or association, or acting towards “commercial, political or charitable goals” [35] excludes the application of the exemption. From a more formal perspective, the use of an ‘open profile’ also excludes the application of the household exemption.
[36] Having an open profile means that the user of a Social Network does not limit the fruition of the content to a contact list of known users (such as family and friends), but opens the content to every single user of the Social Network, including non-contacts, or makes it indexable by search engines. On the same topic, it should also be considered that for the Art. 29 Working Party the amount of contacts on a Social Media profile matters, since: “A high number of contacts could be an indication that the household exception does not apply.” [37] Worryingly, for me as well as for any other Social Network user, what constitutes a high number of contacts has not been clarified.
[38] Normally, the owner of a smart speaker is considered only a Data Subject, as such entitled to the protection of his/her personal data via the rights established by the GDPR. However, doubts arise concerning the role of the owner of a smart speaker vis-à-vis those individuals that might come into contact with said devices inside the home of the owner: guests, occasional third parties, or domestic helpers. Personal Data of temporary guests, in fact, can be recorded by the smart speaker in two ways. Guests can be recorded by accident, as background noises, or if they decide to use the smart speaker in the first person, for example by requesting the assistant to play a song, find information online, look for a take away restaurant, and so on. In the first case the smart speaker would be activated by the owner, and the voice of the guests would be processed in order to distinguish the owner’s order from the background noise and avoid mistakes of the assistant. In the second case the guest would voluntarily awake the device to request something.
[39] Part 4 below analyzes whether in such cases the owner of a smart speaker can be considered also a controller, based on what has been discussed so far.
4 “Alexa, am I a data controller?” In order to evaluate whether the owner of a smart speaker can be qualified as controller, we should consider whether the conditions explored in part 3 are met. In order to do so, after having acknowledged the existence of the requirement of legal or natural persons as controllers, this part will focus on two possible interpretations of the element of control. In particular, this part will focus on the relationship between owners of smart speakers and the producers. Subsequently, the household exemption will also be discussed. Since both natural and legal persons qualify to be controllers, this constituting element of control is easy to match in the case of an owner of a smart speaker. As per the second element, the possible plurality of actors, the complexity of AI and the IoT, especially when combined, makes the existence of separate and joint control particularly common.
[40] It might, therefore, happen that, besides the producers of a smart speaker, and apps/skills developers, other subjects might be considered controllers. However, which part of the processing does the owner or the user of a smart speaker have actual control over? The less likely hypothesis is that the control of the owner is on the entirety of the processing, with a congruence of his/her purposes and means with the purposes and means of, for instance Amazon, Google, or the third parties providing the apps and skills. Due to the way these devices work, in fact, it is almost impossible for the owner to have control over the purposes of the processing. The technologies behind smart speakers are owned by the companies providing the goods and services, and are often protected by Intellectual Property or other rights. The only part on which the owner might have a form of control is whether activating the device, instead of using the mute button to prevent the device from listening, once the guests are in the house. This, however, could be interpreted in an extensive manner as to integrate the first stage of the processing: the data collection. The devices record and store every sound happening within the range of their sensors for the entire duration of a task, including background noises and conversations. In the case of Google Home, it has been explained before how it even keeps the recordings of non-identified users in a separate log. Furthermore, even if a task isn’t ordered, they constantly scan the environment looking for the wake word, immediately deleting the sounds scanned, but nevertheless initially collecting and processing them even if for very short fractions of time. These circumstances imply that the voice or other personal data of guests are collected and, sometimes, even further elaborated. This constitutes processing. Similarly to the Wirtschaftsakademie case, therefore, it could be argued that the owner works as a ‘facilitator’ that triggers the collection by activating and not muting the device (which would be a way to prevent or interrupt the processing).
[41] It should also be reminded at this point how, in order to be a controller, a subject needs not have access to the personal data processed.
[42] Having established that there is a stage which the owner of smart speakers can have control on, whether said control includes some or all of the purposes and means of the processing constitutes the third element. With regard to the means, an extensive interpretation of the provisions of the GDPR could consider that the choice of the means occurs when the owner chooses the smart speaker. In this interpretation, the smart speaker would represent the means for the collection stage of the processing. Obviously, the means for the stages of the processing occurring in Cloud would be under the sole control of the producers/providers. These latter, similarly to Facebook in the Wirtschaftsakademie case, would be the primary controllers due to the extensive decisional power they exercise.
[43] This, however, leaves room to the possibility that the owner of the smart speaker is not a controller. It has been explained above that the decisional power over the means only qualifies as integrating control when it concerns the “essential elements”, such as which data are processed, and for how long. It can be debated whether deciding simply which device is in use, and when, can count as an essential element of the means. With regard to the purposes, it is worth repeating that their determination is enough, in any case, to appoint the role of controller on a subject.
One interpretation, also quite extensive, could be that the purposes are decided by the owner based on which task and, therefore, which skill or app is activated. This reconstruction does not seem to match with the reality of the processing, however. Let’s take the case of guests being accidentally recorded as background noise. If the owner of a smart speaker wants to play some music, the owner does not want to process the conversation happening in the background. The recording is not connected nor functional to the purpose of playing a song. It is, as said, accidental and does not serve any purpose of the owner. Let’s consider instead the other possible option, that the guests are recorded for the fraction of a second while the device searches for the trigger word. This function is not activated by the owner and it might require a stretch to consider the purpose of finding the wake word as a purpose established by the owner. On the opposite, it is undebatable that the producers of the smart speakers and their apps determine the purposes for the processing, and determine how the technology works. It can be debated whether the owner has a factual (actual) decisional power over them, to the point of become a controller.
[44] Even considering the extensive interpretations of the requirements necessary for the owners of smart speakers to become separate controllers, for the mere collection stage and, possibly, only with regard to the means, it should be verified whether the household exemption applies.
There are, currently, thousands of apps or skills available for Amazon Echo and Google Home.
[45] The capabilities of these devices range from activating other devices inside the home, such as light switches, appliances, locks, or indeed security cameras, to reading the news, telling the weather forecast, reading aloud emails and messages, taking photographs, sending email and messages, finding recipes, keeping a list of the groceries, playing songs, buying online, or posting content on Social Networks. The vast majority of these capabilities can be reasonably catalogued as personal, family, or household related activities. With regard to activities carried out online or on Social Networks, according to Recital 18 of the GDPR they would also fall within the household exemption. The exception would be if they are carried out for commercial, [46] political, or charitable purposes or the Social Network profile of the owner is open. In the latter case the actions of the owner would be considered as making information available freely on the Internet. Doubts could also arise based on the number of contacts of the owners on the Social Network platforms used.
A particularly grey area might be the data of personnel hired to work inside the house, for instance for cleaning or maintenance. Whether the accidental collection of their data might fall within the household and personal affairs, remains unclear.
Furthermore, based on the Ryneš case it could be argued that the household exemption would not apply if the collection of the data does not occur within the house, but outside, in a public space such as the outside street or communal areas of an apartment building. It shall be pointed out in this regard that, while the devices are in most cases located inside the house of the owners (according to Google in the 75% of cases in the living room of the owners), [47] their sensors could be also located on the outside (such as security cameras, which would integrate a case extremely similar to the abovementioned Ryneš ). The sensors could be powerful enough to catch sounds from outside the house of the owner (for instance from neighboring apartments or the hall outside the entrance of the house).
In the latter cases, the fact that the collection would occur outside of the household environment could be enough to exclude the application of the household exemption, making the owner a separate controller liable under the GDPR (if the extensive interpretation of the means and purposes is applied too).
5 Conclusions: a plea against the extensive interpretation of art. 4(7) GDPR As it frequently happens, especially in the field of Data Protection, the question of whether the owner of a smart speaker should be considered, besides a Data Subject, also a controller vis-à-vis guests or other people temporarily in the house, should be answered on a case by case basis. As this paper has explained, there are several factors to be considered in order to appoint the role of controller and to assess the applicability of the household exemption.
Some care should be taken in interpreting the requirements established to identify the controller. In this regard, particularly relevant is the way in which control over the means and purposes is interpreted. Doubts might arise on whether the choice of having and using the smart speakers per se constitutes control over the means through which the data are collected. It is also debatable whether the choice of which app to activate might constitute control over the purposes for which the collection is carried out.
In this regard, I want to put forward some arguments against the extensive interpretation of the definition of controller in the case of owners of smart speakers.
The overall premises, highlighted by both the ECJ and the Art. 29 Working Party with regard to control, are that the assessment shall be made based on the factual conditions of the processing, and the controller shall have actual decisional power over certain fundamental aspects of the processing. However, both the ECJ and the Art. 29 Working Party have often chosen not to consider as significant a very concrete and factual condition that can affect the autonomy of the prospective controller: the unbalance of power between certain providers of services or products, such as Facebook, Google, or Amazon, and smaller private parties. This unbalance is even bigger in the case of non-professional individuals. In Wirtschatfsakademie, the Court follows the idea of the late Advocate General Bot that once a person has accepted terms and conditions on which he or she has absolutely no negotiating power, then nevertheless the person “may always be regarded as a controller, given his actual influence over the means and purposes of the data processing.” [48] Said influence merely being choosing to use the service instead than not using it. On the one hand, the primary role of certain providers and producers is recognized. The impossibility of individuals to negotiate or affect any aspect of the service or product, is also acknowledged. On the other hand the actual influence on the processing, necessary to qualify the controller, is deemed to be all enclosed in the mere choice of not walking away from a service or product (which, in many cases, comes with significant social and peer pressure).
[49] In the Wirtschaftsakademie case the position of administrators of a Facebook page, especially for a business, can make said reasoning in part justifiable. However, in the context of that case both the Court and the Advocate General affirm that the lack of power of individuals vis-à-vis the providers or producers does not exclude control in general. This is a dangerous formulation, and it should not be interpreted as opening the way to its application to other cases in which there is a power imbalance between potential controllers. To justify this extensive interpretation, the Advocate General affirms that holding more subjects liable, regardless of who those subjects are and what is their actual power on the processing, can have a positive ripple effect on big providers and producers. It could push them to a more careful compliance with the GDPR.
[50] I don’t find this justification very convincing. Besides being unsupported, it also fails to consider how holding multiple actors liable fragments the liability of these primary controllers, and creates the possibility for dangerous loopholes. Overall, it does not really add any particular benefit to a damaged party seeking for redress, at least from the monetary perspective, since individuals most likely have a limited patrimony if compared to companies and corporations.
If we consider the position of an individual owning a smart speaker, the unbalance of power vis-à-vis the producers and providers becomes even more significant. Interpreting in an extensive manner the requirement for control and, consequently, deeming the owners of said devices controllers together with companies such as Amazon or Google with regard to guests (equating owning a product to being ’facilitators’), would mean to ignore the factual circumstances and ignore the necessity for actual decisional power. This is even more so if we consider average users might not even be fully aware of the functioning of the smart speakers. Considerations concerning the connection between effective decisional power and responsibility, which are already being discussed extensively with regard to consent in the GDPR, should be part of the discussion with regard to control too. It is, in this sense, comforting that a more moderate position is being taken by Advocate General Bobek in the context of the Fashion ID case. The case concerned the possible joint controllership between a website and Facebook due to the fact that the first has a plug-in on its website to automatically “like” the relating Facebook page of the company. In his Opinion the Advocate General has already taken a position against an extensive interpretation of the definition of controller, highlighting that attributing responsibility to a subject that is not in control of the processing would be unjust, and it would not help the Data Subject.
[51] According to Advocate General Bobek, in fact, if the alleged co-controller does not have any actual control over the processing, the Data Subject cannot see his/her rights enforced. If, for instance, the Data Subjects exercises the right of access against a controller without actual control, this latter will not possibly be in the position to provide the Data Subjects with his/her personal data… because the controller does not even have availability of them.
[52] It shall be pointed out that AG Bobek overall did not exclude the existence of joint control for the specific case of Fashion ID.
In its final decision the Court, following the path of the Wirtschaftsakademie decision, confirmed that Fashion ID is joint controller together with Facebook, but only for the initial stage of the processing.
[53] However, the Advocate General appears to share the same concerns with regard to expanding the definition of control in an indiscriminate and generalized way.
Furthermore, in interpreting the requirements for the existence of pluralistic control, particular care should be put with regard to the sharing of purposes and means. In the case of the owners of a smart speaker, the purpose is using the device. As explained above, such use gives life, often accidentally, to the collection of data of the guests. However, the owner does not need nor want the data of the guests in order to achieve the purpose. This is a matter which, in other contexts, would be dubbed collateral damage. On the other hand, the producers and providers of the smart speaker do need all the data available in order for the device to properly function (and for other purposes, such as marketing, statistics, and so on). The possibility of separate, independent control could still be open, but in this regard the arguments made above with regard to the actual control of the individual owners and the power unbalance should be considered.
It should be noted how the very same Art. 29 Working Party has also held in the past, with regard to owners of Internet of Things devices, a more moderate position. In its Opinion on the matter, in fact, after having considered privacy and Data Protection implications of the IoT, the Working Party concluded that: Users of IoT devices should inform non-user data subjects whose data are collected of the presence of IoT devices and the type of collected data. They should also respect the data subject’s preference not to have their data collected by the device.
[54] This indication has been followed by Google, which on the website entirely dedicated to Google Home expressly recommends owners to inform their guests of the presence of the device and make avail of the mute button.
[55] From a systemic perspective, besides being more in line with the element of the factual evaluation of the position of a controller, the approach I propose is also consistent with the European system of data protection as a whole. The GDPR has, in fact, as purposes protecting individuals and fostering the internal market.
In terms of fostering the internal market, it could even be argued that making the owners of smart speakers controllers risks to have a chilling effect on the market, with some potential buyers opting not to purchase the devices to avoid the risk of being sued by angry friends or neighbors.
Contrary to the abovementioned opinion of Advocate General Bot in the Wirtschaftsakademie case, I believe that including the owners of smart speakers among the controllers does not grant a higher degree of protection to individuals. It does not offer higher protection to the guests, since the positive ripple effect has so far not been proved and appears to be more wishful thinking than reality. Following the Wirtschaftsakademie and Fashion-ID cases, in fact, so far no change appears to have occurred in the behaviour of the principal controller, Facebook. This latter has not modified the functioning of its like buttons or cookies. The additional responsibilities assigned to small, local enterprises which mostly necessitate Facebooks’ services in order to be visible and gain potential customers do not appear to have put pressure on Facebook. This approach actually risks to increase legal uncertainty, [56] due to the fragmentation of the liability that otherwise would entirely lay on corporations (and therefore would keep corporations fully accountable and act as a deterrent too). In the words of Advocate general Bobek: Making everyone responsible means that no-one will in fact be responsible. Or rather, the one party that should have been held responsible for a certain course of action, the one actually exercising control, is likely to hide behind all those others nominally ‘co-responsible’, with effective protection likely to be significantly diluted.
[57] It also does not protect the owners of smart speakers, which are individuals as well as (over-burdened) [58] data subjects, and would find themselves projected in a role that affirms them in control, while in reality they do not have any power vis-à-vis the companies providing the devices and the software. In the words of Advocate General Bobek: “no good (interpretation of the) law should reach a result in which the obligations provided therein cannot actually be carried out by its addressees.” [59] The paradoxical nature of this situation emerges more clearly if we draw a parallel with another institute of the law that is often associated with Data Protection: product liability.
[60] Imagine individual A being sued by another individual, named B, based on product liability. B has been damaged by A’s domestic appliance, while B was a guest at A’s house. Let’s exclude that A has misused the domestic appliance, since the damage was entirely caused by a fault in the product. Within the regime of product liability such claim would have no basis, as it would not be enough that A, by buying and using the product, acted as an enabler or a facilitator of the damage. Why would, then, this reasoning be applied in the case of data processing by another, very evolved, form of domestic appliance? The final argument in support of my position comes from the system of the law. In the relationship between the owner of smart speakers and the guest, in the case of damages deriving to this latter from the actions or functioning of the device, other legal protections still apply, therefore not leaving the damaged subject without remedies. Civil law protection, such as tort/extra-contractual liability or even criminal law would cover the relationship between the two besides the regime of the GDPR.
[61] In other words, the system of the law already has it covered, at least with regard to the remedies for damages.
[62] Regardless of the qualification of the owner as controller, it has been pointed out how the qualification of online and Social Network activities as purely personal or domestic, as well as the positioning within the domestic environment, make it reasonable to apply the household exemption in most cases. However, circumstances such as the openness of the Social Network profile or of other platforms on which the data might be published via the smart speakers, the number of contacts the owner has on a Social Network, or possible commercial, political or charitable purposes, might exclude the application of the household exemption. Similarly, if some of the sensors of the smart speakers collect data from outside of the house, from public spaces or spaces belonging to other individuals (such as the neighbors), the household exemption would likely not apply. In most of these cases, however, (the only exception possibly being the carrying out of commercial, political, or charitable activities) the arguments against an extensive interpretation of the notion of controller still stand.
In conclusion, while an extensive application of the GDPR can help tackle the challenges that arise from new, complex technologies, the expansion should be done reasonably and carefully. In the case of the extensive interpretation of the concept of controller, careful consideration should be given to all factual circumstances, including the palpable imbalance of power between the parties involved, especially to avoid undesired consequences which would result in failing to obtain the highest possible degree of protection for all the individuals involved (including the individual owning a smart speaker and using it for private and household purposes only). The case of the owner of a smart speaker is, in this sense, a perfect example in which the role of (joint) controller should be reserved for those actors, such as the producers and providers of the hardware and software, who hold the real and concrete decisional power concerning means and purposes.
So far the case law concerning joint control has seen peculiar actors being appointed controllers side-by-side with big corporations providing the very service which originated the processing. In the above-mentioned Wirtschaftsakademie and Fashion-ID cases, in fact, the joint controllers were all small and medium-sized businesses. While the imbalance of power could understandably be disregarded in the case of companies/businesses being joint controllers, there are additional elements that differentiate an individual owning a smart speaker.
First, the individual owner most frequently uses the smart speaker in the context of private and family life. From the perspective of the technological and commercial reality, most non-professional individual users lack the knowledge, expertise and (legal and material) capability to affect the processing and the technology that is, in a sense, imposed on them. I believe these factual elements can guide the courts and, possibly, the European legislator in separating individual owners from producers of smart speakers. In other words: let’s not make the owner of an Echo or Home a controller for the mere fact of putting guests within the range of their sensors.
[1] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1. Hereinafter GDPR.
[2] See, for instance, the U.S. Smart Speaker Adoption Report 2019 compiled by Voicebot and Voicify, available at https://voicebot.ai/smart-speaker-consumer-adoption-report-2019 (accessed 28 July 2020).
[3] Michael Simon, “Google Assistant works with over 5,000 smart devices, but Alexa is far in the lead with 12,000” (Tech Hive, 03 May 2018), available at https://www.techhive.com/article/3269825/google-assistant-5000-smart-devices.html (accessed 11 July 2019).
[4] Jacob Kastrenakes, “Amazon now lets you tell Alexa to delete your voice recordings” (The Verge, 29 May 2019), available at https://www.theverge.com/2019/5/29/18644027/amazon-alexa-delete-voice-recordings-command-privacy-hub (accessed 11 July 2019).
[5] Please note that, at the time of writing, the U.S. Federal Trade Commission has started an investigation into Amazon for alleged violations of the Children’s Online Privacy Protection Act (COPPA) with regard to a specific smart speaker, the Echo Dot, marketed as kid-friendly. See, for instance, Makena Kelly, “Amazon’s kid-friendly Echo Dot is under scrutiny for alleged child privacy violations” (The Verge, 09 May 2019), available at https://www.theverge.com/2019/5/9/18550425/amazon-echo-dot-kids-privacy-markey-blumenthal-ftc (accessed 26 June 2019).
[6] “Let your child use the Google Assistant on your speaker or Smart Display” (Google Assistant Help), available at https://support.google.com/assistant/answer/9071584?hl=en&ref_topic=7658509 (accessed 21 June 2019).
[7] “Guests & Google Home” (Google Nest Help), available at https://support.google.com/googlenest/answer/7177221?hl=en (accessed 26 June 2019).
[8] With regard to the third parties developing the apps or skills, questions arise concerning the possible qualification as processor and/or joint controllers together with companies like Amazon and Google. A further question concerns whether the Terms and Conditions to which third parties agree in order to use the API qualify as an agreement according to art. 26 of the GDPR (see part 3 below). The answers to these questions fall, however, outside of the scope of this paper and shall therefore be left for future research.
[9] Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ 1995 L 281/31. Hereinafter the Data Protection Directive, the Directive, or Directive 95/46/EC.
[10] Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data 1981, ETS 108, art. 2(d). See also Article 29 Data Protection Working Party, “Opinion 1/2010 on the Concepts of ‘Controller’ and ‘Processor’” (WP169, 2010), p. 8.
[11] Andrej Savin, EU Internet Law (Elgar European Law, 2nd ed., 2017), p. 271.
[12] Opinion 1/2010, p. 4. It shall be noted that, while the Opinion refers to the definitions of controller and processor as established by the Data Protection Directive, the circumstance that they have not changed with the GDPR leads the author to affirm that the Opinion remains largely valid today. Some differences concerning joint controllership will be discussed further on.
[13] Opinion 1/2010, p. 8. Being in an actual position to determine purposes and means, while used by the Working Party in Opinion 1/2010, has not been further defined or explained, but appears to be used as a starting point to evaluate the factual circumstances leading to the identification of (joint) controllers. I believe, however, that due to the complexity of the current technological landscape, the debate on the matter would greatly benefit from a more in-depth analysis, by the European legislator and/or the ECJ, of what it means to actually be in the position to determine.
[14] Opinion 1/2010, pp. 18-21; Handbook on European data protection law, pp. 105-106.
[15] Some authors have pointed out that the allocation of responsibilities has not been clarified in the current existing regime either. See Rene Mahieu, Joris van Hoboken, Hadi Asghari, “Responsibility for Data Protection in a Networked World – On the Question of the controller, ‘Effective and Complete Protection’ and Its Application to Data Access Rights in Europe” (2019) 10 Journal of Intellectual Property, Information Technology and Electronic Commerce Law 39-59, paras. 26-28.
[16] Opinion 1/2010, p. 24.
[17] Ibid., p. 22.
[18] Respectively, Case C‑210/16, Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH [2018] (Wirtschaftsakademie), and Case C-25/17, Tietosuojavaltuutettu v Jehovan todistajat – uskonnollinen yhdyskunta [2018] (Jehovah’s Witness).
[19] As well as of non-Facebook users who would open the page via a website, in this specific case the webpage of the Wirtschaftsakademie.
[20] Opinion 1/2010, p. 11.
[21] Wirtschaftsakademie, para. 38.
[22] Jehovah’s Witness, para. 66.
[23] Brendan van Alsenoy, “Liability under the EU Data Protection law: From Directive 95/46 to the General Data Protection Regulation” (2016) 7 Journal of Intellectual Property, Information Technology and Electronic Commerce Law 271-288, para. 2.3.1.
[24] Ibid.
[25] Opinion 1/2010, p. 14.
[26] Ibid., p. 13.
[27] Ibid., p. 14.
[28] Ibid. (emphasis added).
[29] GDPR, art. 2, para. 2(c).
[30] Art. 29 Data Protection Working Party, “Opinion 5/2009 on Online Social Networking” (WP 163, 2009), p. 3.
[31] Handbook on European data protection law, p. 103.
[32] Case C-212/13 Ryneš v Úřad Pro Ochranu Osobnich Údajů [2014] (Ryneš).
[33] It should be noted that in Ryneš, the existence of a legitimate interest (safety and security) of the individual and the family could still be invoked to justify the processing without the need to obtain the consent of the recorded individuals.
[34] GDPR, Recital 18.
[35] Opinion 5/2009, p. 6.
[36] Case C–345/17 Buivids [2019], para. 43.
[37] Ibid. See also Case C-101/01 Bodil Lindqvist v Åklagarkammaren i Jönköping [2003], para. 47.
[38] While art. 2 and recital 18 of the GDPR and the analysis of the Working Party offer some more guidance with respect to the Data Protection Directive, the concept of “purely personal or household activity” applied to social network platforms still presents several unclear elements and remains the object of debate. For an analysis of the situation before the GDPR, but still in part relevant nowadays, see Brendan van Alsenoy et al., “Social networks and web 2.0: are users also bound by data protection regulations?” (2009) 2(1) Identity in the Information Society 65–79; Patrick van Eecke and Maarten Truyens, “Privacy and social networks” (2010) 26(5) Computer Law & Security Review 535–546.
[39] The latter case can be seen as similar to that of a guest asking to use the Wi-Fi connection at someone’s house. In both cases the owner of, respectively, the smart speaker or the Internet connection makes available to the guests technologies that collect their personal data. In the case of Wi-Fi access, the processing would then relate to the activities of the ISP as well as any other service the guest would make use of while online. In the case of smart speakers, the processing would entirely be carried out by the provider of the assistant (Amazon, Google) and/or the third parties that manage the skills or apps.
[40] Jenna Mäkinen, “Data quality, sensitive data and joint controllership as examples of grey areas in the existing data protection framework for the Internet of Things” (2015) 24(3) Information & Communication Technology Law 262-277, p. 272.
[41] Wirtschaftsakademie, Opinion of AG Bot, para. 56.
[42] Ibid., para. 38.
[43] Ibid., para. 73.
[44] This approach appears in line with that adopted by other scholars with regard to users of social network platforms abiding by these latter’s terms and conditions and the qualification as controllers. See the abovementioned Brendan van Alsenoy et al., “Social networks and web 2.0: are users also bound by data protection regulations?” (2009) 2(1) Identity in the Information Society 65–79; Patrick van Eecke and Maarten Truyens, “Privacy and social networks” (2010) 26(5) Computer Law & Security Review 535–546.
[45] Greg Sterling, “Google Action vs. Alexa Skills is the next big App Store battle” (Search Engine Land, 19 February 2019), available at https://searchengineland.com/google-actions-vs-alexa-skills-is-the-next-big-app-store-battle-312497 (accessed 11 July 2019).
[46] A hypothesis which is not particularly remote. Imagine the case of a fashion blogger using Echo Look, a device made to organize the closet and take pictures of outfits, to take the picture of a certain outfit and post it on his/her Instagram account, from which most of his/her revenue comes.
[47] Sara Kleinberg, “5 ways voice assistance is shaping consumer behavior” (Think with Google, January 2018), available at https://www.thinkwithgoogle.com/consumer-insights/voice-assistance-consumer-experience/ (accessed 11 July 2019).
[48] Wirtschaftsakademie, Opinion of AG Bot, para. 60.
[49] Anabel Quan-Haase and Alyson L. Young, “Uses and Gratifications of Social Media: A Comparison of Facebook and Instant Messaging” (2010) 30(5) Bulletin of Science, Technology & Society 350-361, p. 357.
[50] Wirtschaftsakademie, Opinion of AG Bot, para. 74.
[51] Case C-40/17 Fashion ID GmbH & Co. KG v Verbraucherzentrale NRW e.V. [2018], Opinion of AG Bobek, para. 91.
[52] Ibid., para. 84.
[53] Case C-40/17 Fashion ID GmbH & Co. KG v Verbraucherzentrale NRW e.V. [2019].
[54] Art. 29 Data Protection Working Party, “Opinion 8/2014 on Recent Developments in the Internet of Things” (WP 223, 2014).
[55] “Guests & Google Home” (Google Nest Help), available at https://support.google.com/googlenest/answer/7177221?hl=en (accessed 26 June 2019).
[56] Mahieu, van Hoboken, Asghari, para. 44.
[57] Case C-40/17, Opinion of AG Bobek, para. 92.
[58] Lilian Edwards et al., “Data subjects as data controllers: a Fashion(able) concept?” (Internet Policy Review, 13 June 2019), available at https://policyreview.info/articles/news/data-subjects-data-controllers-fashionable-concept/1400 (accessed 14 June 2019). In their article, the authors focus on a different technology, namely personal data stores (PDS). As their argument is particularly PDS-specific, I consider it outside of the scope of this work.
[59] Case C-40/17, Opinion of AG Bobek, para. 93.
[60] I acknowledge that, while the product liability regime falls entirely within private law, Data Protection inherently possesses a double nature of both fundamental right and private law, as abundantly debated by privacy and data protection scholars in the past two decades. I believe, however, that the peculiar double nature of Data Protection does not render a parallel with consumer protection invalid; on the contrary, it makes such a parallel possible due to the fact that both regimes can be used as tools to protect a weak party in a bi- or multi-lateral legal transaction and are deployed to regulate horizontal asymmetrical relations (even though the fact that Data Protection insists on a fundamental right makes it apt to be deployed in vertical relations too). As an example, a comparison between Data Protection and institutes of private law (including product liability) has been carried out by M. Paun in “Legal Protection in Consumer Financial Services: Source of inspiration for data protection?” (Amsterdam Privacy Conference, Amsterdam, 5-7 October 2018). It could be argued that, unlike product liability, the processing of personal data creates risks for the fundamental rights of individuals, which could be reason enough to grant a strong protection via an extensive interpretation of the role of the controller. Considering, however, that product liability is a tool created having as its starting point situations in which individuals are damaged in their possessions or even in their bodily integrity, the underlying and implicit protected interests appear to be of great importance in both data protection and product liability.
[61] As stated by the Art. 29 Working Party for the case of damages deriving from Social Networking activities. See Opinion 5/2009, pp. 6-7.
[62] In this regard it shall be noted how the regime of joint and several responsibility of GDPR’s joint control might appear to offer an easy point of contact to a data subject. By activating the remedies vis-à-vis the owner of a smart speaker, a data subject might appear to have an advantage: starting a procedure in a familiar language, in one’s own country. This, however, is valid in any case, since art. 77 GDPR gives data subjects the right to start a procedure before the Data Protection Authority of the country they belong to, or the country of habitual residence, or in which the workplace is located, or where the violation of the rights has occurred, in the official language of said country. Indeed, I acknowledge that the procedure would then be not against a faceless company but against an individual. I do not believe, however, that this is a good enough reason to burden individuals who own and use a smart speaker but, as I explain in the article, have limited to no control over its functioning, with a responsibility as significant as that of data controller.
Issue 20:1 (1-281) – SCRIPTed
(2023) 20:1 SCRIPTed 1–281, Issue DOI: 10.2966/scrip.200123

Editorial
Editorial Introduction, Ayça Atabey and Şimal Efsane Erdoğan, pp. 1-4

Articles
Operationalizing Privacy by Design: an Indian illustration, Ankit Kapoor, pp. 5-55
This article identifies Privacy by Design [“PbD”] as a suitable regulatory approach to address the attack on personal data in the Fourth Industrial Revolution. It proposes Privacy Engineering [“PE”] as a concrete methodology to operationalize the otherwise vague Privacy by Design. Privacy Engineering operationalizes the normative knowledge of privacy into specific use cases through layers of flexible abstract thinking, interconnected through a “web of templates”. This “web of templates” can be constructed by answering the two-fold question of relevancy and extent of data protection required. PE provides regulators with a specific language in which they can communicate with data controllers to establish privacy obligations and undertake prioritized capacity building for resource-deprived data controllers.
This article also illustrates the application of this methodology through the Account Aggregator Framework and the Aarogya Setu Application. Positioning this method as not just an operational guide but also a rigorous tool of critique, it also evaluates the extent of their compliance. Account Aggregator exceptionally embodies PbD, while Aarogya Setu does so only averagely.
Keywords: Personal Data Protection; privacy by design; privacy engineering; Aarogya Setu application; account aggregator framework.
A Risk-based Approach to AI Regulation: System Categorisation and Explainable AI Practices, Keri Grieman and Joseph Early, pp. 56-88
The regulation of artificial intelligence (AI) presents a challenging new legal frontier that is only just beginning to be addressed around the world. This article provides an examination of why regulation of AI is difficult, with a particular focus on understanding the reasoning behind automated decisions. We go on to propose a flexible, risk-based categorisation for AI based on system inputs and outputs, and incorporate explainable AI (XAI) into our novel categorisation to provide the beginnings of a functional and scalable AI regulatory framework.
Keywords: Artificial intelligence, regulation, explainable artificial intelligence, foreseeability, explainability

How Will the EU Digital Services Act Affect the Regulation of Disinformation?, Sharon Galantino, pp. 89-129
This article examines the self-regulatory framework established by the EU Code of Practice on Disinformation and considers how the EU Digital Services Act [DSA] will affect that framework. Firstly, this article argues that the DSA entrenches the opacity of firms’ partnerships with fact-checking organisations and investigations of coordinated inauthentic behaviour, as well as fails to provide adequate transparency of its newly created redress mechanisms. Secondly, this article argues that, overall, the DSA fails to protect European standards of freedom of expression in the regulation of disinformation, reflecting an uncertainty of how public bodies should regulate the private gatekeepers of information. As these public bodies press private actors to address disinformation—lawful if undesirable expression—the question of the effect of informal state pressure on the horizontal application of fundamental rights gains a sense of urgency.
Keywords: Digital Services Act, Code of Practice on Disinformation, platform governance, disinformation, freedom of expression

The Right to Repair: Patent Law and 3D Printing in Australia, Matthew Rimmer, pp. 130-202
Considering recent litigation in the Australian courts, and an inquiry by the Productivity Commission, this paper calls for patent law reform in respect of the right to repair in Australia. It provides an evaluation of the decision of the Full Court of the Federal Court in Calidad Pty Ltd v Seiko Epson Corporation [2019] FCAFC 115 – as well as the High Court of Australia’s consideration of the matter in Calidad Pty Ltd v Seiko Epson Corporation [2020] HCA 41. It highlights the divergence between the layers of the Australian legal system on the topic of patent law – between the judicial approach of the Federal Court of Australia and the Full Court of the Federal Court of Australia, and the endorsement of the patent exhaustion doctrine by the majority of the High Court of Australia. In light of this litigation, this paper reviews the policy approach taken by the Productivity Commission in respect of patent law, the right to repair, consumer rights, and competition policy. After considering the findings of the Productivity Commission, it is recommended that there is a need to provide for greater recognition of the right to repair under patent law. It also calls for the use of compulsory licensing, crown use, competition oversight, and consumer law protection to reinforce the right to repair under patent law. In the spirit of modernising Australia’s regime, this paper makes a number of recommendations for patent law reform – particularly in light of 3D printing, additive manufacturing, and digital fabrication. It calls upon the legal system to embody some of the ideals which have been embedded in the Maker’s Bill of Rights and the iFixit Repair Manifesto. The larger argument of the paper is that there needs to be a common approach to the right to repair across the various domains of intellectual property – rather than the current fragmentary treatment of the topic. This paper calls upon the new Albanese Government to make systematic reforms to recognise the right to repair under Australian law.
Keywords: Patent law, patent validity, patent infringement, patent licensing, implied license, patent exhaustion, patent exceptions, crown use, compulsory licensing, competition policy, consumer protection law, the right to repair, 2D printing, 3D printing, additive manufacturing, digital fabrication, circular economy, sustainable development, Maker Movement, Maker’s Bill of Rights, iFixit, iFixit Repair Manifesto

Regulating Manipulative Artificial Intelligence, Tegan Cohen, pp. 203-242
AI scientists are rapidly developing new approaches to understanding and exploiting vulnerabilities in human decision-making. As governments around the world grapple with the threat posed by manipulative AI systems, the European Commission (EC) has taken a significant step by proposing a new sui generis legal regime (the AI Act) which prohibits certain systems with the ‘significant’ potential to manipulate. Specifically, the EC has proposed prohibitions on AI systems which deploy subliminal techniques and exploit vulnerabilities in specific groups. This article analyses the EC’s proposal, finding that the approach is not tailored to address the capabilities of manipulative AI. The concepts of subliminal techniques, group-level vulnerability, and transparency, which are core to the EC’s proposed response, are inadequate to meet the threat arising from growing capabilities to render individuals susceptible to hidden influence by surfacing and exploiting vulnerabilities in individual decision-making processes. In seeking to secure the benefits of AI while meeting the heightened threat of manipulation, lawmakers must adopt new frameworks better suited to addressing new capabilities for manipulation assisted by advancements in machine learning.
Keywords: artificial intelligence; manipulation; AI Act; regulation; subliminal techniques; vulnerability; transparency

The Internet, Internet Intermediaries and Hate Speech: Freedom of Expression in Decline?, Natalie Alkiviadou, pp. 243-268
This paper looks at the developments of hate speech regulation online, specifically its horizontalization, with private companies increasingly ruling on the permissibility levels of speech, placing the right to free speech at peril. To elucidate issues at stake, the paper will look at the meaning of hate speech, the online landscape in terms of the prevalence and removal of hate speech and recent legal and policy developments in the sphere of private regulation in Europe, critically weighing up the pros and cons of this strategy. This paper demonstrates how seeking to tackle all types of hate speech through enhanced pressures on intermediaries to remove content may come with dire effects to both freedom of expression and the right to non-discrimination. At the same time, due attention must be given to speech which may actually lead to real world harm. A perfect solution is not available since, as is the case in the real world, the Internet cannot be expected to be perfect. However, at the very least, the principles and precepts of IHRL and the thresholds attached to Article 20(2) ICCPR, as further interpreted by the Rabat Plan of Action, must inform and guide any effort in enhanced platform liability.
Keywords: Hate speech; Internet intermediaries; social media platforms; freedom of expression.
An Overview of the Proposed Cypriot Distributed Ledger Technology Law of 2021, Sotiris Paphitis, pp. 269-281
The Cypriot Ministry of Finance published in September 2021 a bill on a proposed Distributed Ledger Technology Law which aims to incorporate blockchain technologies, including tokens and smart contracts, into the Cypriot legal system. This piece provides the reader with a synopsis of the main provisions of the bill and what their effect could be once adopted. A brief analysis is also provided with regard to whether the proposed legislation achieves its goals of facilitating the proper use of such technologies whilst contributing to the prevention and suspension of money laundering and guaranteeing consumers’ rights, all in a manner that is technologically neutral so that it does not obstruct the further development, and incorporation into the local legal system, of distributed ledger technologies.
Keywords: Blockchain; smart contracts; distributed ledger technology; proposed legislation
Issue 13:3 (232-409) – SCRIPTed
(2016) 13:3 SCRIPTed 232–409, Issue DOI: 10.2966/scrip.130316

Cover image: Max Mitscherlich
The City of Edinburgh inspired me to take this picture. Right after sunset I was standing on Calton hill and observed the pulsating city life. The diversity between old buildings, nature and modern technology is what fascinates me about this city.
Editorial
Looking Back, Looking Forward, Edward S. Dove and Catriona McMillan, pp. 232-234

Articles
Decentralisation, Distrust & Fear of the Body – The Worrying Rise of Crypto-Law, Alan Cunningham, pp. 235-257
The increasing collective use of distributed application software platforms, programming languages and crypto-currencies around the blockchain concept for general transactions may have radical implications for the way in which society conceptualises and applies trust and trust-based social systems such as law. By exploring one iteration of such generalised blockchain systems – Ethereum – and the historical lineage of such systems, it will be argued that indeed their ideological basis is largely one of distrust, decentralisation and, ultimately, via increasing disassociation of identity, a fear of the body itself. This ideological basis can be reframed as a crypto-legal approach to the problems of human interaction, one whereby the purely technological solutions outlined above are considered adequate for reconciling many of the problems of our collective existence. The article concludes, however, by re-iterating a perspective of law more so as an entirely embodied and trust dependent notion. These aspects go some way to explaining the necessarily centralised role it takes on within societies. They also explain why the crypto-legal approaches advanced by systems like Ethereum – or even the co-opting of blockchain technology by law firms themselves – will only ever be at best efficiency exercises concerned with the processing of data relating to legal affairs, and not the more radical, ambiguous and difficult process of actual legal thought or, indeed, engagement with trust.
Incident Response: Protecting Individual Rights Under the General Data Protection Regulation, Andrew Cormack, pp. 258-282
Identifying and fixing problems with the security of computers and networks is essential to protect the data they contain and the privacy of their users. However, these incident response activities require additional processing of personal data, so may themselves create a privacy risk. Current laws have created diverse interpretations of this processing – from encouragement to prohibition – creating barriers to incident response and challenges for collaboration between incident responders. The EU’s new General Data Protection Regulation explicitly recognises the need for processing to protect the security of networks and information. It also, through rules on processing for “legitimate interests”, suggests a way to identify an appropriate balance between risks. Consistent use of these provisions could provide a common legal approach for incident response teams, enabling them to work more effectively. This article builds on analysis by the Article 29 Working Party to develop a framework for assessing the benefit and impact of incident response activities. This is applied to a range of practical detection, notification and information sharing techniques commonly used in incident response, showing how these do, indeed, protect, rather than threaten, the privacy and data protection rights of computer and network users.
Artificial Intelligence and Intellectual Property essay competition
Editorial: The Future of IP Law in an Age of Artificial Intelligence, Burkhard Schafer, pp. 283-288

Human Aspects of Digital Rights Management: the Perspective of Content Developers, Marcella Favale, Neil McDonald, Shamal Faily, and Christos Gatzidis, pp. 289-304
Legal norms and social behaviours are some of the human aspects surrounding the effectiveness and future of DRM security. Further exploration of these aspects would help unravel the complexities of the interaction between rights protection security and law. Most importantly, understanding the perspectives behind the circumvention of content security may have a significant impact on DRM effectiveness and acceptance at the same time. While there has been valuable research on consumer acceptability (the INDICARE project, Bohle 2008, Akester 2009), there is hardly any work on the human perspective of content creators. Taking video games as a case study, this paper employs qualitative socio-legal analysis and an interdisciplinary approach to explore this particular aspect of content protection.
Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law, Erica Fraser, pp. 305-333
The nascent but increasing interest in incorporating Artificial Intelligence (AI) into tools for the computer-generation of inventions is expected to enable innovations that would otherwise be impossible through human ingenuity alone. The potential societal benefits of accelerating the pace of innovation through AI will force a re-examination of the basic tenets of intellectual property law. The patent system must adjust to ensure it continues to appropriately protect intellectual investment while encouraging the development of computer-generated inventing systems; however, this must be balanced against the risk that the quantity and qualities of computer-generated inventions will stretch the patent system to its breaking points, both conceptually and practically. The patent system must recognise the implications of and be prepared to respond to a technological reality where leaps of human ingenuity are supplanted by AI, and the ratio of human-to-machine contribution to inventive processes progressively shifts in favour of the machine. This article assesses the implications on patent law and policy of a spectrum of contemporary and conceptual AI invention-generation technologies, from the generation of textual descriptions of inventions, to human inventors employing AI-based tools in the invention process, to computers inventing autonomously without human intervention.
Artificial Invention: Mind the Machine!, Shamnad Basheer, pp. 334-358
This script is a work of pure fiction intended to serve an educational purpose. Though it substitutes for a law review article in terms of format, it attempts to highlight the key arguments on the topic with appropriate references, where applicable. Much like an original piece of scholarship, it also advances some novel arguments in the form of tentative theses.
Analysis
Data Localisation and the Balkanisation of the Internet, Erica Fraser, pp. 359-373
Unrestricted international data flow is of critical importance to economies and people globally. Data localisation requirements interrupt the global flow of data by restricting where and how they may be stored, processed or transferred. Governments are increasingly imposing such requirements to protect the individual rights of their citizens, along with sentiments of national sovereignty and aspirations of economic benefit. However, data localisation requirements are likely to lead to the balkanisation of the Internet, which may threaten those very objectives. This Analysis article provides an introduction to and an overview of the likely advantages and drawbacks of data localisation requirements following the Snowden revelations. Economic, security and individual rights questions are addressed and illustrated with the recent Russian data localisation law.
Reports
Conference Report: Liminal Spaces Symposium at the IAB 2016: What Does it Mean to Regulate in the Public Interest?, Annie Sorbie, pp. 374-381
This Conference Report summarises a Wellcome Trust-sponsored symposium held at the 13th World Congress of the International Association of Bioethics, held in Edinburgh 14-17 June 2016 (IAB2016). This symposium was curated by the Liminal Spaces Project, which is conducted under the auspices of the JK Mason Institute for Medicine, Life Sciences and the Law at the University of Edinburgh School of Law, and sought to address the question: “What does it mean to regulate in the public interest?”

Book reviews
Privacy Revisited: A Global Perspective on the Right to be Left Alone, Jiahong Chen, pp. 382-386
Surveillance Futures: Social and Ethical Implications of New Technologies for Children and Young People, Joseph Savirimuthu, pp. 387-392
Rethinking Cyberlaw: A New Vision for Internet Law, Joseph Savirimuthu, pp. 393-397
Cyber Law in Ireland, TJ McIntyre, pp. 398-400
Medical Law and Ethics, 6th Edition, Edward S. Dove, pp. 401-404
Patents, Human Rights and Access to Science, Edward S. Dove, pp. 405-409
How Will the EU Digital Services Act Affect the Regulation of Disinformation?
Volume 20, Issue 1, February 2023
Sharon Galantino*
© 2023 Sharon Galantino. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract
This article examines the self-regulatory framework established by the EU Code of Practice on Disinformation and considers how the EU Digital Services Act [DSA] will affect that framework. Firstly, this article argues that the DSA entrenches the opacity of firms’ partnerships with fact-checking organisations and investigations of coordinated inauthentic behaviour, as well as fails to provide adequate transparency of its newly created redress mechanisms. Secondly, this article argues that, overall, the DSA fails to protect European standards of freedom of expression in the regulation of disinformation, reflecting an uncertainty of how public bodies should regulate the private gatekeepers of information. As these public bodies press private actors to address disinformation—lawful if undesirable expression—the question of the effect of informal state pressure on the horizontal application of fundamental rights gains a sense of urgency.
Keywords: Digital Services Act, Code of Practice on Disinformation, platform governance, disinformation, freedom of expression

Cite as: Sharon Galantino, "How Will the EU Digital Services Act Affect the Regulation of Disinformation?" (2023) 20:1 SCRIPTed 89, https://script-ed.org/?p=4119, DOI: 10.2966/scrip.200123.89

* Sutherland School of Law, University College Dublin, Dublin, Ireland, sharon.galantino@ucdconnect.ie

1 The Challenge of Disinformation

Historically, the term ‘disinformation’ referred to the escalating information operations conducted by the United States and Soviet Union over the course of the twentieth century. In this Cold War context, disinformation was the discipline of state agents who weaponised both facts and falsehoods to stoke tensions and influence popular opinion within and beyond their rival’s public. It was the work of CIA agents who launched balloons with leaflets over the Iron Curtain as well as the efforts of KGB agents who promoted the conspiracy of a US-manufactured AIDS pandemic. [1]
Today, the term ‘disinformation’ typically refers to many tangled phenomena. For its part, the European Commission emphasises three general characteristics of the contemporary information environment when it talks about ‘disinformation’. First, the Commission acknowledges that state actors still weaponise false or misleading information, but new actors have entered this space, many lured by financial gain. [2] Second, technologies that facilitate instant communication, pseudonymity, amplification, and targeting present a substantial risk for false or misleading information to cause public harm. [3] And third, the diminished influence of traditional journalism outlets and low levels of digital media literacy make mitigation of disinformation more difficult. [4] By emphasising these characteristics, the Commission has charted a policy strategy that prioritises coordinated intelligence, enhanced scrutiny of monetized content, and the availability of relevant and reliable information. [5]
These observations and resultant priorities appear relatively straightforward. Nevertheless, they obscure an ever-expanding range of behaviour and content the Commission seeks to regulate under what it now calls ‘the overarching term “disinformation”’. [6] In its May 2021 ‘Guidance on Strengthening the Code of Practice on Disinformation’, the Commission encouraged firms to adopt a wider view of ‘disinformation’ to include ‘disinformation in the narrow sense, misinformation, as well as information influence operations and foreign interference in the information space, including from foreign actors, where information manipulation is used with the effect of causing significant public harm’. [7] From relatively straightforward priorities, convoluted proposals one day come.
While the Commission attempts to parse ‘disinformation’ into regulable elements, elsewhere the term is used as a simple rhetorical weapon. US politicians and pundits notably favour the epithet to discredit and dismiss opponents and opposing views. Fox News pundit Tucker Carlson, for example, warned that ‘CNN itself has become a disinformation network far more powerful than QAnon’. [8] And from the halls of the US Capitol Building, then-House Speaker Nancy Pelosi criticised the President and the White House Coronavirus Response Coordinator for ‘spreading disinformation’ about COVID-19. [9] In these examples, the term ‘disinformation’ is inseparable from the speakers’ political perspectives: the speakers draw an ambiguous distinction between themselves and their targets and, in drawing that distinction, cynically exempt themselves from further debate. In short: You’re wrong and the discussion is over.
‘The term has always been political and belligerent,’ observed technology reporter Joseph Bernstein in a provocative essay on ‘Big Disinfo’, his term for the burgeoning counter-disinformation industry. [10] In it, Bernstein argues that the singular focus of ‘Big Disinfo’ on technology firms as the cause of ‘disinformation’ shields other powerful actors—politicians and legacy media, in particular—from scrutiny. [11] Viewed in this light, the Commission’s response to the challenge of disinformation—define it, refine it, and encourage technology firms to referee it—either reflects a narrow appreciation of the political nature of disinformation, or an expansive view of the Commission’s own political future.
2 The EU-Level Response to Disinformation

Following the Brexit referendum and the 2016 US presidential election, allegations of disinformation took centre stage. News reports exposed the Facebook-Cambridge Analytica scandal at the same time social media executives appeared before the US Congress to testify on Russian electoral interference. [12] With the 2019 European elections on the horizon, responding to the threat of disinformation became a matter of urgency for the Commission. [13] That urgency led the Commission to develop a voluntary framework of industry self-regulation known as the ‘EU Code of Practice on Disinformation (2018)’. [14] The original technology firm signatories to the Code were Facebook, Google, Twitter, and Mozilla. [15] They were followed by Microsoft in May 2019 and TikTok in June 2020. [16]
The Code defines ‘disinformation’ and sets out five broad commitments for signatories: improve scrutiny of advertisements, ensure public disclosure of political and issue-based ads, ensure the integrity of their services, empower users of their services, and empower the research community. Signatories commit to prepare an annual self-assessment report of their counter-disinformation measures.
The Code is situated within a wider pattern of EU-level promotion of industry self-regulation of online speech. The Code, however, is unique among these frameworks because it aims to address lawful speech. Moreover, many of the broader objectives of the Code are complemented by EU legislation on data protection and audiovisual media services as well as Member State electoral and media laws. The Code is generally not complemented by disinformation-specific legislation in Member States, though laws in Germany and France stand out as exceptions.
In practice, the Commission’s strategy to mobilise private actors to regulate online speech reflects the challenges authors have identified with gatekeeper regulation—namely, alignment of gatekeepers’ information-control processes with fundamental rights. On this point, there are only a limited number of judgments from Member State courts—Germany, Italy, and the Netherlands—which acknowledge the horizontal effect of fundamental rights on the private contracts between users and technology firms. It remains unclear whether courts in other Member States will develop similar responses to private content moderation decisions.
2.1 Self-Regulation: EU Code of Practice on Disinformation (2018)

The Commission took its first steps toward a Code of Practice in January 2018 when it established a high-level group of 39 experts from civil society, social media, news media, and academia to advise on responses to disinformation. [17] After three months, all but one expert adopted a final report which, firstly, defined disinformation as ‘false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit’ and, secondly, suggested a ‘self-regulatory approach based on a clearly defined multi-stakeholder engagement process’. [18] The lone holdout—the European Consumer Organisation—voted against the report because it lacked recommendations to combat ‘clickbaiting’ and to examine ‘the link between advertising revenue policies of platforms and dissemination of disinformation’. [19] Indeed, efforts to discuss approaches which would examine the role firms’ business models play in the spread of disinformation were reportedly opposed by representatives from Facebook and Google. [20]
Following publication of the group’s report, the Commission convened a Multi-Stakeholder Forum on Disinformation composed of two different and autonomous groups: a Working Group, made up of the major online platforms and advertising associations, and a Sounding Board, made up of representatives from media, civil society, fact-checking organisations, and academia. [21] The Working Group prepared a draft Code of Practice on Disinformation, while the Sounding Board provided comments and advice.
By the final meeting of the two groups, there was tension in the room. According to the Sounding Board’s spokesperson, the Sounding Board could not support the Code because it ‘lack[ed] quantifiable KPIs [key performance indicators], include[d] vaguely-phrased commitments, and [had] no mechanism to ensure compliance’. [22] Some members expressed a more fundamental concern: who decides what disinformation is? [23] Others considered that continued discussions with the Working Group ‘would not be worthwhile’, while a few Sounding Board members simply walked out of the meeting. [24] In its unanimous final opinion, the Sounding Board observed that the Code ‘contains no common approach, no clear and meaningful commitments, no measurable objectives or KPIs, hence no possibility to monitor process, and no compliance or enforcement tool: it is by no means self-regulation, and therefore the platforms, despite their best efforts, have not delivered a Code of Practice’. [25] Nevertheless, this final draft of the Code was signed by Facebook, Google, Twitter, Mozilla, and representatives of the advertising industry in October 2018. They were followed by Microsoft in May 2019 and TikTok in June 2020.
[26] Paolo Cesarini, a former senior Commission official, suggests that ‘the element of intentionality’ eliminates the risk of creating ‘judge[s] of truth’.
[27] This attractive explanation, however, overlooks one of the most obvious risks of the Code’s definition: it designates technology firms as judges of intent. Pielemeier, in an assessment of the Code’s definition, observes that discerning a speaker’s intent online can be incredibly difficult ‘where nuance, jargon, and slang—not to mention the use of different languages—proliferate’.
[28] That difficulty is compounded at scale: ‘A one-in-a-million chance [in content moderation] happens 500 times a day’, said Twitter vice president of Trust and Safety, Del Harvey, in 2014.
[29] Additionally, it is not clear how the Commission or signatories conceptualise the potential of disinformation to cause public harm. The Code itself offers no guidance except to equate public harm with threats to democratic processes, public health, the environment, and security. Chase points out that the Commission has only ever cited two sources of ‘essentially opinion- rather than evidence-based’ data to support an actual causal link between disinformation and public harm.
[30] They include a synopsis of nearly 3,000 public comments received during the public consultation phase, [31] as well as the results of a Eurobarometer poll related to trust in media, perceived exposure to disinformation, and perceived ability to identify it.
[32] As a practical matter, it may be difficult to establish and measure harm because, as Pielemeir notes, the impacts of disinformation will likely be more diffuse than, for example, terrorist incitement.
[33] Still, researchers are taking a bite at the apple. In 2020, Ben Nimmo, a journalist turned influence operations analyst, proposed a breakout scale for researchers to ‘compare the probable impact of different operations in real time’.
[34] The scale divides influence operations into six categories which are roughly defined by how many platforms a particular influence operation reaches.
[35] Translated to policymakers, the breakout scale suggests that, as an operation infiltrates more platforms, the risk of public harm increases.
The Code also sets out five broad commitments: improve the scrutiny of advertisements, ensure public disclosure of political and issue-based ads, ensure the integrity of their services, empower users of their services, and empower the research community. These commitments are further qualified by allowances for flexible uptake. Signatories need only sign up to the commitments which correspond with their services and technical capabilities. Moreover, on account of differences among signatories’ operations, purposes, technologies, and audiences, the Code ‘allows for different approaches to accomplishing the spirit’ of the commitments.
[36] Chase speculates that the Commission acted more like a facilitator, not a negotiator, in this context because disinformation, unlike hate speech, is not illegal.
[37] A common criticism of the Code is that it generally lacks ambition. As Taylor et al observe, the Code is a mirror image of signatories’ existing policies and current initiatives, [38] particularly its ‘Annex of Best Practices’ which links to various community rules and announcements of the original signatories.
[39] The European Regulators Group for Audiovisual Media Services (ERGA) has criticised the commitments for creating ‘space for the signatories to implement measures only partially or, in some cases, not at all’.
[40] For example, signatories follow different approaches to the identification and disclosure of issue-based ads, perhaps owing to the lack of an agreed-upon definition or understanding of ‘issue-based advertising’.
[41] As of October 2019, Facebook was the only signatory with a policy on issue-based ads applicable across the EU, while Twitter’s policy, which included a certification mechanism, applied only to the US (with the exception of application in a single Member State, France).
[42] Moreover, despite the Code’s call for protection of fundamental rights, it fails to put forward any measures to do so.
[43] For example, there are no Code commitments to introduce appeal mechanisms for account sanctions or removals. But access to fair processes may do more to legitimise the regulation of disinformation than reports of content removals ever can. Marsden et al note that ‘a very important factor in accountability for legal content posted may be examples of successful appeals to put content back online’.
[44] Other criticisms take aim at the Code’s casual reporting requirements.
[45] To monitor the Code’s effectiveness, signatories commit to write annual self-assessment reports to be made publicly available and subject to review by a third-party organisation. But these reports, which firms typically organise around their chosen commitments, tend to use the informal language and selective presentation of data commonly found in corporate press releases. For example, in its 2019 annual self-assessment, Twitter describes its efforts to protect the integrity of its service by listing six statistics whose accuracy and significance are unverifiable. Among them: ‘2.5 times more private information removed with a new, easier reporting process’ and ‘100,000 accounts suspended for creating new accounts after a suspension during January – March 2019, a 45% increase from the same time last year’.
[46] In short, the logic of the Code’s reporting and monitoring process is an honour system.
In light of the Code’s imprecise commitments, allowance for flexible uptake, and lax reporting, it is reasonable to expect oversight challenges and poor outcomes. Indeed, after the Code’s first year, ERGA found it was not possible to assess implementation of three of the five commitments—improve the scrutiny of advertisements, ensure public disclosure of political and issue-based ads, and ensure integrity of services—because the data provided was completely inadequate for monitoring compliance.
[47] Signatories’ commitment to empower users produced mixed results. ERGA found that some firms made use of tools like labels and links to trustworthy information, but those tools were not available across all Member States and firms did not provide any data on their use.
[48] In addition to developing user interface tools, several signatories participate in media literacy campaigns. However, ERGA found that those campaigns typically ‘involve only a tiny fraction of the total population (mainly journalists, politicians, and school teachers)’ and are concentrated in major cities.
[49] In light of signatories’ reluctance to share data—as well as the Code’s presumption of media literacy campaigns’ effectiveness—signatories could easily comply with their commitment to empower consumers by making further investments in these campaigns. Striking a cautiously optimistic note, Butcher notes that the Code’s ‘most important work lies in its long-term measures to increase societal resilience to disinformation’, particularly investments in media literacy.
[50] The research community did not fare any better under the Code. Here, ERGA found that firms developed a variety of relationships with fact-checking organisations, including contracting directly (Facebook), providing technical support (Google), or not officially supporting them at all (Twitter).
[51] Where signatories work with fact-checking organisations, it is unclear whether and how the firms used fact-checkers’ assessments.
[52] Not only are fact-checkers kept in the dark, as reported by Ananny in 2018, [53] but so are the researchers responsible for assessing compliance with the Code. At the Member State level, for example, Teeling and Kirk were unable to assess the extent to which Facebook’s partnerships with fact-checking organisations reduced the distribution of false news in Ireland because the data to make those assessments were not available to them.
[54] On the whole, ERGA found that researchers continue to face ‘enormous difficulties’ gaining access to data, particularly ‘crucial data points’ on ad targeting and user engagement with disinformation.
[55] Notably, researchers reported that the ad libraries created by Facebook, Google, and Twitter in response to the Code ‘were inadequate to support in-depth systematic research into the spread and impacts of disinformation in Europe’.
[56] While many researchers are concerned with the absence of audience targeting data in the ad library, there are also reports of the libraries’ incomplete data, [57] limited search functions, [58] and mysteriously vanishing political ads.
[59]

2.2 EU-Level Response to Disinformation in Context

2.2.1 Industry Self-Regulation in the EU

The Code is situated within a wider pattern of EU-level promotion of industry self-regulation of online speech. It joins the ‘Code of Conduct on Countering Illegal Hate Speech (2016)’ and the Commission’s ‘Recommendation on Measures to Effectively Tackle Illegal Content Online (2018)’, each of which complements national legislation restricting these forms of expression to various degrees.
[60] All three of these instruments, observes Kuczerawy, are forms of ‘delegated private enforcement’, which tends to be ‘less visible and less obvious’ than direct state intervention.
[61] The Code of Practice on Disinformation, however, is unique among these frameworks because it seeks to address lawful speech such as false news articles, conspiracy theories, and hyper-partisan rhetoric.
[62] This speech should be moderated, according to the Commission, because it may cause harm to personal and public health, crisis management, the economy, and even social cohesion.
[63] Viewed in this light, the Code bears out Lessig’s warnings about public bodies’ indirect use of ‘code as law’: broadly, the Commission ‘gets the benefit of what would clearly be an illegal and controversial regulation without even having to admit any regulation exists’.
[64]

2.2.2 EU Legislation

Many of the broader objectives of the Code are complemented by EU legislation, including the General Data Protection Regulation (GDPR) and the Audiovisual Media Services Directive (AVMSD).
[65] For example, the Code calls on signatories to ensure transparency of political and issue-based ads, while the GDPR, applied in an electoral context, addresses microtargeting of voters based on unlawful processing of personal data.
[66] The Code and the GDPR, Nenadić notes, form a ‘European approach’ to tackling the particular challenge of social media manipulation during elections.
[67] Further, the Code calls on signatories to partner with civil society, governments, and educational institutions to support efforts to improve digital media literacy. On this front, signatories collaborate with fact-checking organisations, [68] distribute grants to media literacy organisations, [69] and work on Member State-level media literacy projects.
[70] These activities are one part of a wider European effort (see, for example, the AVMSD) to equip citizens with the skills required ‘to exercise judgment, analyse complex realities and recognise the difference between opinion and fact’.
[71]

2.2.3 Member State Electoral and Media Laws

In addition to EU legislation, the Code is complemented by Member State electoral and media laws. Generally, these laws set the ground rules for political advertising on broadcast media during campaign periods, including who may advertise, when, and how much money may be spent. These rules, however, are not harmonised across Europe, nor are they necessarily applicable to online political advertising. For example, in a 2020 comparative study on the regulation of political advertising in the EU, Furnémont and Kevin found that France prohibits online advertising during election periods, Ireland does not specifically regulate online political advertising, and Italy promotes self-regulatory guidelines for equal access to online platforms during election campaigns.
[72] Presently, Member States are considering a proposal for a political advertising regulation put forward by the Commission in late 2021 to harmonise rules across the Union and establish a high level of transparency.
[73]

2.2.4 Member State Disinformation Laws

With a few notable exceptions, the Code is generally not complemented by disinformation-specific legislation in Member States. One exception is Germany’s Network Enforcement Act (NetzDG), adopted in 2017, which requires social media platforms to remove ‘clearly illegal’ content within 24 hours of receipt of a user complaint.
[74] Categories of illegal content—including the dissemination of certain propaganda, commission of forgery, and incitement to crime and hatred—are set out in separate statutes. Germany’s approach, Butcher observes, bundles disinformation into hate speech law.
[75] Critics of the law argue that it incentivises platforms to remove reported content because they must operate within the law’s tight 24-hour deadline or face heavy fines.
[76] Moreover, the law fails to provide for judicial oversight or right to appeal.
[77] Another exception is France’s Law 2018-1202, adopted in 2018, ‘on the fight against the manipulation of information’.
[78] The law allows the public prosecutor, any candidate, any party or political group, or any interested person to apply to a judge for an order requiring platforms to take ‘proportionate and necessary measures’ to stop the ‘deliberate’ dissemination of ‘inaccurate or misleading allegations of fact likely to alter the sincerity of the […] ballot’ in the three months preceding general elections.
[79] After receiving the application, the judge has 48 hours to issue a decision.
[80] Examining this procedure, Craufurd Smith argues that to establish the subjective intent of the originator, or even re-publishers, will prove ‘all but impossible, certainly in the relevant time-frame for action’.
[81] Instead, applicants will have to demonstrate the ‘manifest falsity’ of the information, from which the originator’s intent may be inferred.
[82]

2.2.5 Horizontal Effect of Fundamental Rights

The Code, which is premised on the ability of technology firms to control information, reflects a network gatekeeper theory of regulation.
[83] Network gatekeepers, according to Barzilai-Nahon, are those entities with the discretion to engage in information-control processes (e.g., selecting, channelling, withholding, timing, and deleting), which they carry out via information-control mechanisms.
[84] For example, signatories’ commitment to enforce policies on identity is premised on their ability to suspend or terminate user accounts. But these processes, Laidlaw notes, pose risks to fundamental rights.
[85] These rights are enforceable vertically against the state but generally unenforceable horizontally against the private firms at the frontlines of enforcement.
There are only a limited number of judgments from Member States’ courts which acknowledge the horizontal effect of fundamental rights on the private contracts between users and technology firms. Moreover, within this limited case law, the complainants are political parties or elected officials. For example, Kettemann and Tiedeke describe cases in Germany and Italy where courts applied ‘public law in private spaces’ to reinstate the Facebook accounts of right-wing political parties.
[86] In both cases, the horizontal application of public law principles (in Germany, equality before the law; in Italy, the right to political participation) to private contracts was supported by the courts’ findings that Facebook had become an essential platform to disseminate political messages.
[87] Where courts have acknowledged the horizontal effect of freedom of expression on these private contracts, however, they do not emphasise that access to the platform is essential to participate in public discourse. For example, a district court in the Netherlands considered whether LinkedIn’s suspension of a Member of Parliament’s account and removal of his posts for running afoul of the company’s public health disinformation policies violated his right to freedom of expression.
[88] Weighing the MP’s freedom of expression against the importance of protecting public health, the court emphasised the obligations of elected officials in the context of a public health pandemic: criticism of public policies is a legitimate exercise of freedom of expression, while criticism which undermines such policies is not.
[89] The court ordered LinkedIn to restore the MP’s account, but not the removed posts.
Overall, it is unclear how this case law will develop in Germany, Italy, and the Netherlands, and whether other Member States which recognise the horizontal effect of fundamental rights will follow a similar pattern. On this latter point, there is reason to doubt uniform national responses. TJ McIntyre, for example, describes a similar lack of clarity in Ireland where the law also recognises the horizontal effect of fundamental rights.
[90] Drawing on Irish case law in the public broadcasting context, McIntyre suggests that ‘Irish courts would be reluctant to develop a “must carry” rule which second guessed the policies of platforms’.
[91]

2.3 Criticisms of the EU Code of Practice on Disinformation

2.3.1 An Open-Ended Definition of Disinformation

The Code defines ‘disinformation’ as ‘verifiably false or misleading information which, cumulatively, (a) is created, presented and disseminated for economic gain or to intentionally deceive the public; and (b) may cause public harm, intended as threats to democratic political and policymaking processes as well as […] the protection of EU citizens’ health, the environment or security’.
[92] By this definition, which lacks a legal basis, false or misleading information becomes ‘disinformation’ through its interaction with ‘bad actors’: those who disseminate it for economic gain or to deceive. It lays the groundwork for firms to regulate disinformation as a problem of bad behaviour, bypassing the more problematic burden of becoming arbiters of truth.
This open-ended definition of ‘disinformation’ is a politically convenient regulatory trapdoor. It is subject to revision at the Commission’s behest, enforced at the whims of firms on the frontline who shape the definition to suit their own operational and financial needs, and it is shielded from both democratic deliberation and judicial review.
The Commission has begun to call for more nuanced definitions of the challenges associated with disinformation. Citing the COVID-19 ‘infodemic’, the Commission pointed to a need to ‘differentiate more precisely between various forms of false or misleading content’ and ‘manipulative behaviour’.
[93] This echoes policy recommendations made by Chase who emphasises the need to distinguish between disinformation as ‘pieces of content’ and disinformation as ‘disruptive campaigns’.
[94] Dittrich acknowledges this distinction as well, but pushes back on its use to broaden the scope of enforcement: ‘[…] the EU should refrain from mandating [firms] to police content directly’ and instead ‘should focus on how [firms] tackle two main drivers of the spread of disinformation, namely fake accounts and inauthentic behavior’.
[95] Nevertheless, in May 2021, the Commission called for expanding the scope of enforcement measures against misinformation, disinformation, influence operations, and foreign interference.
[96] Indeed, the Code and its open-ended definition have the appearance of a repository for European security, electoral, and media policies which cannot survive public or legal scrutiny, or simply lack priority.
2.3.2 Private, Ad Hoc Regulatory Tools Lacking Meaningful Transparency

Signatories are given wide discretion to meet their commitments, resulting in the development of private, ad hoc counter-disinformation tools which lack meaningful transparency. They include tools of standard-setting which address false or misleading information, inauthentic representation, and manipulative behaviour.
[97] Disinformation standards, however, are routinely criticised as unclear, unstable, and inconsistent across platforms.
[98] They also include tools of human detection and evaluation. In the context of disinformation, this is the work of fact-checking organisations and internal investigative teams. Fact-checking organisations, however, do not have broad coverage across Member States.
[99] Moreover, they have evolved ‘highly diversified working practices’ [100] and very little is known about how signatories select claims for fact-checkers or how signatories translate the outputs of fact-checkers into indicators of relevance, authenticity, and authority to prioritise information.
[101] Internal investigative teams participate in ongoing monitoring of users suspected of ‘inauthentic’ or ‘manipulative’ behaviour.
[102] Their work, however, is not subject to investigative transparency. They publicly report very little detail about how their internal systems flag suspected inauthentic behaviour or the duration of their monitoring activities. Typically, they voluntarily disclose the number of coordinated inauthentic accounts they have terminated as well as the accounts’ affiliations with state or non-state actors.
[103] Finally, signatories apply tools of enforcement to violations. In the context of disinformation, signatories typically apply sanctions to behavioural infractions (e.g., account terminations and suspensions) and lighter touch enforcement mechanisms to content (e.g., recommendation and contextualisation).
[104] Account sanctions, however, are not always accompanied by a clear explanation to the affected user, while tools of recommendation and contextualisation, which rely on the work of fact-checking organisations, lack meaningful transparency.
[105] Moreover, little is known about whether tools of recommendation and contextualisation actually succeed in countering disinformation.
[106]

2.3.3 Lack of Effective Redress Possibilities

Users whose accounts are sanctioned, or whose content is removed, lack effective possibilities for redress. While there is a limited sample of cases from Germany and Italy where courts have given horizontal effect to fundamental rights of equal treatment and participation in order to restore users’ access to Facebook—because of the platform’s ‘significant market power’ (Germany) and its ‘systemic relevance [to] political participation’ (Italy)—national courts, on the whole, have not recognised the horizontal effect of freedom of expression in order to reinstate content.
[107] Germany’s Federal Court of Justice has begun to address this gap in protection by applying a consumer protection framework to platform sanctions. In a July 2021 judgment, the court gave horizontal effect to freedom of expression in a consumer protection context when it considered the reasonableness of Facebook’s terms of service related to deletion of content and blocking of user accounts.
[108] First, the court recognised that a platform is entitled to set rules on permissible speech that go beyond criminal prohibitions as well as to remove content or block users when those rules are violated. However, the court observed, a platform’s terms of service, in practice, must reflect an appropriate balance between a user’s freedom of expression and a platform’s freedom to pursue an occupation.
[109] Applying that reasoning to Facebook’s terms of service, the court held that the platform’s deletion of content must be accompanied by notices to the user, at least after the fact, while the platform’s blocking of a user’s account must be accompanied by advance notice to the user.
[110] While this case reemphasises the importance of access to a platform’s service, it also makes clear the limitations of a consumer protection framework to safeguard freedom of expression: the court’s emphasis is on the fair application of the platform’s terms of service when removing content, rather than the congruence of the platform’s rules on permissible speech with principles of freedom of expression. In any event, safeguards for freedom of expression at the national level will continue to develop in an ad hoc manner, precluding adequate protection for freedom of expression across all Member States.
3 Co-Regulation Through the EU Digital Services Act

Like the Code, the Digital Services Act reflects the EU-level trend of using firms as network gatekeepers to regulate online speech. This legislation, however, mobilises firms to address the criticisms of self-regulation by requiring them to adopt safeguards of transparency and redress. These due diligence obligations vary according to the function and size of the firm, though the vast majority of the obligations are addressed to ‘online platforms’, particularly ‘very large online platforms’ (VLOPs).
[111] An online platform, according to the DSA, is ‘a hosting service which, at the request of a recipient of the service, stores and disseminates to the public information’.
[112] This describes the services provided by Code signatories Facebook, Twitter, and TikTok. As the platform’s user population expands, so too do its due diligence obligations.
[113] Online platforms whose average monthly user base is equivalent to 10% of the EU population (45 million users) are designated VLOPs and must adopt additional due diligence obligations to manage the systemic risk of disinformation.
[114] Overall, the DSA provides modest improvements to the transparency of counter-disinformation tools. On one hand, it improves the transparency of automatic detection and evaluation tools by requiring intermediaries to publish reports on the precise purposes of their use for content moderation, which must include ‘a qualitative description, a specification of the precise purposes, indicators of the accuracy and possible rate of error […], and any safeguards applied’.
[115] It also takes steps toward standardisation of online advertising transparency by requiring ‘clear, concise, and unambiguous’ advertisement labels as well as the development of an online advertising repository.
[116] In November 2021, the Commission published a proposal for a regulation on political advertising which will complement these provisions in the DSA.
[117] On the other hand, it is unclear to what extent the DSA reins in signatories’ interpretation of ‘disinformation’. It may deliver transparency of signatories’ policies on coordinated inauthentic behaviour, but it likely will not shed light on how signatories determine indicators of relevance, authenticity, and authority. Moreover, the DSA fails to address the transparency of the human content moderation behind disinformation—namely, fact-checking organisations and internal investigative teams. This preserves the opacity of recommender systems, contextualisation tools, and the regulation of coordinated inauthentic behaviour.
Finally, the DSA establishes a system of internal complaint-handling for platforms complemented by a system of independent, out-of-court dispute settlement bodies to resolve content moderation disputes.
[118] Each of these systems, however, places the burden on affected users to challenge content moderation decisions. This empowers platforms to act first and answer for it later, if at all. Nevertheless, the independent, out-of-court dispute settlement bodies have the potential to provide valuable feedback on the quality of a platform’s content moderation systems, as well as the clarity, application, and enforcement of disinformation standards.
3.1 Limited Restriction of Signatories’ Interpretations of ‘Disinformation’

To what extent does the DSA create safeguards against signatories’ interpretation of the Commission’s open-ended definition of ‘disinformation’? The DSA delivers transparency of a limited set of signatories’ disinformation standards. Article 12 requires intermediaries to publish ‘information on any restrictions that they impose […] in respect of information provided by [users] […] in clear, plain, intelligible, user friendly, and unambiguous language’.
[119] This is an ‘information obligation’ (i.e., policies must be clear and publicly available) limited to those disinformation policies which result in ‘restrictions’.
[120] ‘Restrictions’ most certainly include blunt enforcement mechanisms like account sanctions. But do they include more subtle enforcement mechanisms like recommendation and contextualisation which can produce restrictive effects? Even if they are included, Article 12 does not necessarily require signatories to go the extra step to disclose how they define indicators of relevance, authenticity, and authority which inform the use of these tools. Accordingly, the DSA promises transparency of prohibitions against coordinated inauthentic behaviour (routinely enforced through account sanctions), but does not deliver transparency of the indicators of relevance, authenticity, and authority which inform signatories’ enforcement through tools of recommendation and contextualisation.
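A minimal illustration of this asymmetry, sketched in Python. Both dictionaries are hypothetical: the first models the kind of restriction policy Article 12 plausibly obliges a platform to publish, while the second models the internal ranking indicators that can remain undisclosed; none of the keys or values are drawn from the DSA or any platform’s actual policies.

    # Hypothetical Article 12 disclosure: restriction policies must be
    # published in clear, plain language.
    published_restrictions = {
        "coordinated_inauthentic_behaviour": "account suspension or termination",
        "repeated_sharing_of_false_content": "reduced distribution",
    }

    # Hypothetical internal signals informing recommendation and
    # contextualisation; Article 12 does not clearly require their disclosure.
    undisclosed_indicators = {
        "relevance": "<proprietary scoring model>",
        "authenticity": "<undisclosed account signals>",
        "authority": "<undisclosed source assessments>",
    }

The point of the sketch is simply that an ‘information obligation’ attaching only to the first dictionary leaves the second, which can produce comparable restrictive effects, outside public view.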
3.2 Lack of Transparency of Signatories’ Partnerships with Fact-Checking Organisations

Signatories, in varying degrees of coordination with fact-checking organisations, may continue to define ‘relevant, authentic, and authoritative’ information without meaningful transparency of these indicators. This entrenches the opacity of tools to recommend, or prioritise, information, as well as tools to contextualise information, from low-profile labels to conspicuous warnings requiring click-throughs. Although VLOPs must assess the risks of recommendation and contextualisation to freedom of expression, there is no express requirement anywhere in the DSA to disclose how relevance, authenticity, and authority are defined.
[121] Ultimately this is a matter of transparency of how signatories influence fact-checkers’ claim selection, as well as how signatories translate the outputs of fact-checkers into indicators of relevance, authenticity, and authority. Signatories may indirectly influence claim selection by filtering potential claims to fact-checkers, [122] and they may directly influence claim selection by placing certain claims off-limits as a matter of platform policy.
[123] In terms of translating the outputs of fact-checkers, signatories may use fact-checks to train machine learning or create warnings, rather than simply publishing fact-checks as written.
[124] The DSA fails to shed light on these workflows, precluding scrutiny of how indicators of relevance, authenticity, and authority are developed and deployed to prioritise and contextualise information.
3.3 Lack of Transparency of ‘Coordinated Inauthentic Behaviour’ Investigations

Despite the promise of Article 12 to provide ‘clear, plain, intelligible, user friendly, and unambiguous standards’ for platform policies—which would include policies on coordinated inauthentic behaviour—the DSA mandates a lower standard of public investigative transparency than what signatories have historically voluntarily adopted. Under the DSA, the problem of intentional manipulation of services, as well as practices of ongoing monitoring, is subject to a closed loop of transparency among VLOPs, the Commission, and Digital Services Coordinators.
Presently, signatories publicly report very little about how they detect suspected coordinated inauthentic behaviour. Facebook, for example, has variously referenced ‘internal investigations’, [125] ‘public reporting’ by news agencies [126] and fact-checking organisations, [127] the work of external researchers, [128] and reports from law enforcement [129] as the starting points for user monitoring. While no signatories report on the duration of their monitoring activities, each typically discloses the number of accounts terminated and the affiliation of the network of accounts with state or non-state actors on a monthly basis.
[130] The DSA does not appear to require public disclosure for much of this information. Where it does, it is limited to an annual or semi-annual basis. Article 13 requires intermediaries to report on their total number of account suspensions and terminations, categorised by the type of ‘violation of the terms and conditions […], by the detection method and by the type of restriction applied’.
[131] Accordingly, at least once per year (or every six months for VLOPs), intermediaries must publish the total number of account sanctions for violations of coordinated inauthentic behaviour (and related) policies. It is not clear that disclosure of the ‘detection method’ requires any more than reporting a distinction between automatic or human detection. Moreover, there is no requirement to attribute the operation (as many platforms do) or disclose the duration of monitoring activities (which no platforms have reported in the past).
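To make the contrast concrete, the sketch below juxtaposes the coarse aggregate entry a platform could lawfully publish under Article 13 with the richer disclosures platforms have historically volunteered in their monthly reports. It is a minimal sketch in Python: every field name and figure is invented for illustration and is not drawn from the DSA or from any platform’s actual reporting format.

    # Hypothetical Article 13 aggregate entry: published annually
    # (semi-annually for VLOPs).
    article_13_entry = {
        "violation_type": "coordinated inauthentic behaviour",
        "detection_method": "automated",   # possibly no finer-grained than this
        "restriction_applied": "account termination",
        "total_accounts": 12_400,          # invented figure
    }

    # Hypothetical voluntary disclosure, modelled on platforms' monthly
    # CIB reports: attribution included, monitoring duration still absent.
    voluntary_cib_entry = {
        "month": "2021-04",
        "accounts_terminated": 312,        # invented figure
        "attribution": "network linked to a non-state actor",
        "monitoring_duration": None,       # historically never disclosed
    }

As the sketch suggests, mandated reporting can end up less informative than existing voluntary practice: the statutory entry drops attribution entirely, while neither regime reaches the duration of monitoring.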
Not even VLOPs’ additional transparency reporting requirements are likely to shed light on this ongoing monitoring. Article 26 requires VLOPs to assess the ‘systemic risks stemming from the design, […] functioning and use made of their services’, including the risk of ‘actual or foreseeable negative effects on civic discourse and electoral processes, and public security’.
[132] When making these assessments, VLOPs must analyse ‘whether and how [those risks] are influenced by intentional manipulation of their service, including by means of inauthentic use’.
[133] Although Article 33 requires VLOPs to publicly report the results of these risk assessments, it also allows them to remove information from these public reports, including information which ‘may cause significant vulnerabilities for the security of its service’ and information which ‘may undermine public security or may harm recipients’.
[134] At best, the DSA promises ‘comprehensive reports’ published by the Board, in cooperation with the Commission, which identify and assess ‘the most prominent and recurrent systemic risks reported by [VLOPs] or identified through other information sources’ as well as ‘best practices for [VLOPs] to mitigate the systemic risks identified’.
[135]

3.4 Inadequate Transparency of Redress Mechanisms

While the DSA establishes a system of redress for content moderation decisions, the transparency requirements associated with this system are essentially limited to annual check-ins: Are the platforms settling content moderation disputes in a timely manner? Are their automated content moderation tools accurate? This precludes adequate oversight of the redress mechanisms which is needed to assess two important processes: whether platforms are making appropriate content moderation decisions and whether users are abusing the redress mechanisms. Although Article 23 of the DSA requires platforms to publicly disclose ‘without undue delay’ their initial content moderation decisions and statements of reasons, this does not address the full picture of content moderation practices, which may involve complaints based on those decisions and subsequent engagement with redress mechanisms.
Ultimately, the DSA fails to provide adequate transparency of users’ complaints and platforms’ resolutions to content moderation disputes. Article 13 requires intermediaries to report annually (VLOPs, semi-annually) on the number of complaints received through their internal complaint-handling systems, the basis of those complaints, decisions taken, median time to take a decision, and the number of instances where decisions were reversed.
[136] And Article 23 requires online platforms to report annually (VLOPs, semi-annually) on the number of disputes submitted to out-of-court dispute settlement bodies, the outcomes, median time for settlement, and share of disputes where the platform implemented the bodies’ decisions.
[137] These are important disclosures, but they do not fully address the risks presented by platforms empowered to remove content without warning or the risks of users abusing the complaint-handling system.
Firstly, the DSA should mandate disclosure, at the time a complaint is submitted, of the infringing material removed, date of removal, date of complaint, and basis of the complaint. This would facilitate detection of user abuse of the complaint-handling system at a time when intervention is most urgent. Secondly, the DSA should mandate disclosure, at the time a complaint is settled, of the decision of the platform (or settlement body), date of the decision, and the reason for the decision. This would facilitate more comprehensive oversight of the content moderation practices of platforms as well as early detection of the timeliness of the complaint-handling process. For content moderation decisions related to disinformation—where the Commission has called for greater consistency across platforms to reduce the risk of public harm—these additional transparency requirements could also facilitate oversight of content moderation consistency.
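A minimal sketch, in Python, of what a machine-readable disclosure record along these lines might look like. The ComplaintDisclosure type, its field names, and the flag_stale_complaints helper are all hypothetical illustrations of the elements proposed above; the DSA prescribes no such schema.

    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    # Hypothetical disclosure record for a single content moderation dispute.
    @dataclass
    class ComplaintDisclosure:
        platform: str                     # firm that received the complaint
        content_url: str                  # where the removed material appeared
        removal_date: date                # when the platform removed the material
        complaint_date: date              # when the user filed the complaint
        complaint_basis: str              # e.g. a disputed disinformation label
        decision: Optional[str] = None    # platform or settlement body outcome
        decision_date: Optional[date] = None
        decision_reason: Optional[str] = None

    def flag_stale_complaints(records: List[ComplaintDisclosure],
                              max_days: int = 90) -> List[ComplaintDisclosure]:
        """Return undecided complaints older than max_days: the kind of
        timeliness check a public, Lumen-style database would enable."""
        today = date.today()
        return [r for r in records
                if r.decision is None
                and (today - r.complaint_date).days > max_days]

Records published in this form, once at submission and again at settlement, would allow third parties to run timeliness and consistency checks of this kind without waiting for a platform’s annual report.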
If the DSA mandated disclosure of these elements, the Commission could facilitate the creation of a database similar to Lumen, an independent third-party research project which publishes millions of voluntary submissions of online content complaints, particularly in the context of copyright infringements.
[138] These submissions include the date the complaint was submitted, the technology firm recipient of the complaint, the basis of the complaint, and the URL where the content is located.
The information in the Lumen database makes possible research into the content moderation practices of technology firms as well as detection of user abuse of notice-and-takedown systems. For example, after learning of a number of falsified court documents requesting the removal of content, Volokh relied on the resources in the Lumen database to access thousands of court orders submitted to Google and other search firms.
[139] Initially, Volokh reported that, over a period of four years, about 200 of 700 court orders submitted to Google were either forged, fraudulent, or highly suspicious.
[140] Taking these findings as a case study, Volokh published his observations about designing legal systems to manage the risk of fraud, including the roles of verification processes, deterrent measures, and enhanced public scrutiny.
[141]

4 Co-Regulation of Disinformation Through an Article 10 Lens

Article 10 of the European Convention on Human Rights, which protects freedom of expression, is enforceable vertically against the state but unenforceable horizontally against private actors. In the context of regulating disinformation, it is unclear to what extent the content moderation practices of private firms are in fact compelled by public bodies, amounting to state action to suppress speech.
[142] Although the DSA requires firms to have ‘due regard’ to fundamental rights in their content moderation practices, this requirement is ambiguous: some rights are named while the door remains open to others (Article 12 points to ‘freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms […] in the Charter’), and the practical effect of requiring firms to have ‘due regard’ to them is unclear.
[143] As Appelman et al asked, does ‘due regard’ effectively require horizontal application of certain fundamental rights between intermediaries and users? Because the DSA does not offer guidance on operationalising this provision, the authors note that ‘[it] might remain too vague to have real effect’.
[144] Nevertheless, the Commission convened technology firms to draw up a Code of Practice and it continues to evaluate their progress and recommend improvements. This encouragement of private regulation of speech should operate with attention to fundamental rights. To that end, Article 10 case law sheds light on the hazards created by the DSA as it relates to the regulation of disinformation.
4.1 Insufficient Guarantees Against Abuse

In the case of prior restraints, the Court has held that ‘a legal framework is required, ensuring both tight control over the scope […] and effective judicial review to prevent any abuse of power’.
[145] In Ekin Association v France , the Court considered a minister’s powers to impose ‘general and absolute bans throughout France on the circulation, distribution or sale of any document written in a foreign language or any document regarded as being of foreign origin, even if written in French’. Not only did the law fail to define ‘foreign origin’ or indicate the grounds on which such publications may be banned, but the application of the law produced results that were ‘at best surprising’ and in other cases ‘verge[d] on the arbitrary’.
[146] Moreover, because the administrative bans were subject to limited review only upon application by the affected party, the framework provided ‘insufficient guarantees against abuse’.
[147] The Court clarified this requirement for effective judicial review in Yildirim v Turkey where it described the need for ‘a weighing-up of the competing interests at stake’ in order to ‘strike a balance between them’.
[148] Nevertheless, without a framework that established ‘precise and specific rules regarding the application of preventive restrictions on freedom of expression’, effective judicial review was ‘inconceivable’.
[149] The framework set out by the Code fails to tightly control the scope of content removals and account sanctions (although content removal is not explicitly envisioned by the Code, it occurs in practice). The definition of ‘disinformation’ is subject to ongoing revision at the Commission’s behest and the DSA preserves this arrangement. Indeed, the Commission has called on signatories to update the definition of ‘disinformation’ to include ‘influence operations’ and ‘foreign interference’ with references to vague descriptions of these phenomena.
[150] Still, the out-of-court dispute settlement bodies established by the DSA may facilitate ‘tight control’ over the scope of restrictions on disinformation. Article 18 empowers Member States to certify independent, out-of-court dispute settlement bodies to issue non-binding content moderation decisions. These decisions have the potential to perform a corrective function, aligning signatories’ disinformation policies with fundamental rights principles. Indeed, in its first annual report, the independent Oversight Board for Meta’s content moderation decisions disclosed that the company ‘either demonstrated implementation or reported progress’ for two-thirds of the Board’s non-binding recommendations.
[151] Moreover, content removals and account sanctions are only subject to review upon application by users to the internal complaint-handling system or an out-of-court dispute settlement body. This is an insufficient guarantee against abuse because it empowers signatories to act first to remove content or sanction user accounts, and answer for those decisions later, if at all.
4.2 Lack of Incentives to Avoid Indiscriminate Approaches to Disseminators of Disinformation

While the definition of disinformation is restricted to actors with harmful intent, in practice it is applied indiscriminately against all users who share false or misleading information. The Court has held that ‘an indiscriminate approach to the author’s own speech and statements made by others is incompatible with the standards elaborated in the Court’s case law under Article 10’.
[152] This principle has been explored in the context of journalists’ reproduction of statements made by others. In several cases, the Court has held that a distinction must be made between statements emanating from a journalist and quotations of others because to punish a journalist for disseminating the quotations of others would seriously hamper discussion of matters of public interest.
[153] The state must advance ‘particularly strong reasons’ to do otherwise.
[154] Accordingly, any framework to regulate disinformation must distinguish between the actors with the intent to cause harm and those who lack such intent. Where an actor without the requisite intent reproduces disinformation, the platform must conduct a balancing exercise of the competing interests at stake in the context in which the disinformation was reproduced. Less intrusive restrictions on that actor should be considered (e.g., use of contextualisation tools).
[155] The Commission’s Guidance on Strengthening the Code sets out a definition of ‘misinformation’—’false or misleading content shared without harmful intent’—and calls on signatories ‘to have in place appropriate policies and take proportionate actions to mitigate the risks posed by misinformation, where there is a significant public harm dimension and with proper safeguards for the freedom of speech’.
[156] Appropriate actions include empowering users ‘to contrast this information with authoritative sources and be informed where the information they are seeing is verifiably false’.
[157] But this guidance assumes that all contextualisation tools developed by signatories are created equal, that they are effective, and that they do not interfere with expression. Because so little is known about these tools, which are still being experimented with, it is not possible to say that carving out ‘misinformation’ for proportionate responses will avoid indiscriminate approaches.
4.3 Failure to Mitigate the Risks of Wrongful Takedowns and Removals

The Commission’s definition of disinformation, which is not a legal standard, requires an evaluation of an actor’s motives. In reality, this evaluation is conducted by platforms that want to avoid liability for users’ content, leading them to ‘err on the side of caution and take it down, particularly for controversial or unpopular material’.
[158] This is achieved through the work of content moderators, many unfamiliar with the cultural context of a post, who are given just seconds to make an assessment.
[159] Consequently, there is a high risk, in the context of disinformation, that harmful posts will be removed irrespective of a user’s motive. This risk makes plain the value of lighter touch content moderation tools like labels and warnings, though their ability to mitigate the harm of disinformation is still unknown. By contrast, there may be a low risk of wrongful removal of users for coordinated inauthentic behaviour because those decisions require analyses of patterns of behaviour. Nevertheless, the DSA does not require transparency of these investigations, which is necessary to avoid wrongful removals of users.
5 Conclusions

By preserving the framework of the Code, the DSA fails to address the Code’s root problem: an open-ended definition of ‘disinformation’ without a legal basis. More broadly, the DSA reflects an uncertainty of how public bodies should regulate the private actors whose content moderation practices affect the exchange of information. This EU-level uncertainty is playing out in parallel to uncertainty among Member States which, with limited exceptions, have not addressed disinformation in national law.
It may be that litigation within Member States shapes the wider European experience. On this front, Germany has proven the most active Member State. Its courts have ruled to restore users’ access to their accounts as well as to reinstate content. It remains to be seen whether its judgments will serve as a model for judicial intervention in content moderation in other Member States, and whether there is potential for a clash between developing national standards and a European approach. European law has yet to consider the extent to which informal state pressure brings the actions of private technology firms within the scope of horizontal application of fundamental rights. Further work in this area is required.
[1] Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare (Farrar, Straus and Giroux 2020).
[2] Commission, ‘Action Plan Against Disinformation’ JOIN/2018/36 final, 5 December 2018.
[3] Ibid 4.
[4] Ibid 9–11.
[5] Commission, ‘Communication on the European Democracy Action Plan’ COM/2020/790, 3 December 2020.
[6] Commission, ‘Guidance on Strengthening the Code of Practice on Disinformation’ COM/2021/262, 26 May 2021 (emphasis added).
[7] Ibid.
[8] Tucker Carlson, ‘Mainstream Media Disinformation More Powerful and Destructive Than QAnon’ (Fox News, 23 February 2021), available at https://www.foxnews.com/opinion/tucker-carlson-media-disinformation-more-powerful-destructive-qanon accessed 27 August 2021.
[9] Adia Robinson and Adam Kelsey, ‘Speaker Pelosi Blames Trump, GOP for Deadlock in Coronavirus Relief Negotiations’ (ABC News, 2 August 2020), available at https://abcnews.go.com/Politics/speaker-pelosi-blames-trump-gop-deadlock-coronavirus-relief/story?id=72121342 accessed 27 August 2021.
[10] Joseph Bernstein, ‘Bad News: Selling the Story of Disinformation’ (Harper’s Magazine, September 2021), available at https://harpers.org/archive/2021/09/bad-news-selling-the-story-of-disinformation/ accessed 26 August 2021.
[11] Bernstein (n 10).
[12] Carole Cadwalladr, ‘“I Made Steve Bannon’s Psychological Warfare Tool”: Meet the Data War Whistleblower’ (The Guardian, 18 March 2018), available at http://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump accessed 6 August 2021; Cecilia Kang and Sheera Frenkel, ‘Facebook and Twitter Have a Message for Lawmakers: We’re Trying’ (The New York Times, 4 September 2018), available at https://www.nytimes.com/2018/09/04/technology/facebook-and-twitter-have-a-message-for-lawmakers-were-trying.html accessed 6 August 2021.
[13] Independent High Level Group on Fake News and Online Disinformation, A Multi-Dimensional Approach to Disinformation (March 2018), available at https://data.europa.eu/doi/10.2759/739290 accessed 6 August 2021.
[14] EU Code of Practice on Disinformation [2018], available at https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation accessed 6 August 2021.
[15] European Commission, ‘Code of Practice on Disinformation’ (Updated 13 July 2021), available at https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation accessed 6 August 2021.
[16] Commission (n 15).
[17] European Commission, ‘Experts Appointed to the High-Level Group on Fake News and Online Disinformation’ (12 January 2018), available at https://wayback.archive-it.org/12090/20210424010927/https://digital-strategy.ec.europa.eu/en/news/experts-appointed-high-level-group-fake-news-and-online-disinformation accessed 2 September 2021.
[18] Independent High Level Group on Fake News and Online Disinformation (n 13).
[19] Ibid.
[20] Nico Schmidt and Daphné Dupont-Nivet, ‘Facebook and Google Pressured EU Experts to Soften Fake News Regulations, Say Insiders’ (openDemocracy, 21 May 2019), available at https://www.opendemocracy.net/en/facebook-and-google-pressured-eu-experts-soften-fake-news-regulations-say-insiders/ accessed 2 September 2021.
[21] Commission, ‘Meeting of the Multi-Stakeholder Forum on Disinformation’ (11 July 2018), available at https://digital-strategy.ec.europa.eu/en/library/meeting-multistakeholder-forum-disinformation accessed 2 September 2021.
[22] Commission, ‘Minutes, Fourth Meeting of the Multi-Stakeholder Forum on Disinformation’ (17 September 2018), available at https://ec.europa.eu/information_society/newsroom/image/document/2019-4/final_minutes_of_4th_meeting_multistakeholder_forum_on_disinformation_002_67AFE6B9-B872-0AAE-0D090C9AB5EEBC77_56666.pdf accessed 2 September 2021.
[23] Ibid.
[24] Ibid.
[25] Sounding Board of the Multistakeholder Forum on Disinformation Online, ‘The Sounding Board’s Unanimous Final Opinion on the So-Called Code of Practice’ (24 September 2018), available at https://www.euractiv.com/wp-content/uploads/sites/2/2018/10/3OpinionoftheSoundingboard-1.pdf accessed 2 September 2021.
[26] EU Code of Practice on Disinformation (n 14) preamble.
[27] Paolo Cesarini, ‘Disinformation During the Digital Era: A European Code of Self-Discipline’ (2019) 6 Annales des Mines, available at http://www.annales.org/site/enjeux-numeriques/DG/2019/DG-2019-06/EnjNum19b_3Cesarini.pdf accessed 2 September 2021.
[28] Jason Pielemeier, ‘Disentangling Disinformation: What Makes Regulating Disinformation So Difficult?’ (2020) 2020(4) Utah Law Review 917, 923.
[29] Del Harvey, ‘Protecting Twitter Users (Sometimes From Themselves)’ (TED2014, March 2014), available at https://www.ted.com/talks/del_harvey_protecting_twitter_users_sometimes_from_themselves accessed 16 September 2021.
[30] Peter Chase, ‘The EU Code of Practice on Disinformation: The Difficulty of Regulating a Nebulous Problem’ (29 August 2019) Working Paper of the Transatlantic Working Group on Content Moderation Online and Freedom of Expression 6, available at https://www.ivir.nl/publicaties/download/EU_Code_Practice_Disinformation_Aug_2019.pdf accessed 21 December 2022.
[31] Commission, ‘Synopsis Report of the Public Consultation on Fake News and Online Disinformation’ (26 April 2018), available at https://wayback.archive-it.org/12090/20210728070511/https://ec.europa.eu/digital-single-market/en/news/synopsis-report-public-consultation-fake-news-and-online-disinformation accessed 3 September 2021.
[32] Commission, ‘Flash Eurobarometer 464 Report: Fake News and Disinformation Online’ (February 2018), available at https://europa.eu/eurobarometer/surveys/detail/2183 accessed 3 September 2021.
[33] Pielemeier (n 28).
[34] Ben Nimmo, ‘The Breakout Scale: Measuring the Impact of Influence Operations’ (Brookings, September 2020), available at https://www.brookings.edu/wp-content/uploads/2020/09/Nimmo_influence_operations_PDF.pdf accessed 15 January 2021.
[35] Ibid.
[36] EU Code of Practice on Disinformation (n 14).
[37] Chase (n 30).
[38] Emily Taylor et al, ‘Industry Responses to the Malicious Use of Social Media’ (Oxford Information Labs, November 2018), available at https://stratcomcoe.org/cuploads/pfiles/web_nato_report_-_industry_responsense.pdf accessed 9 August 2021.
[39] EU Code of Practice on Disinformation (n 14) Annex II Current Best Practices From Signatories of the Code of Practice.
[40] European Regulators Group for Audiovisual Media Services (ERGA), ‘Report on Disinformation: Assessment of the Implementation of the Code of Practice’ (2020) 2, available at https://erga-online.eu/wp-content/uploads/2020/05/ERGA-2019-report-published-2020-LQ.pdf accessed 15 September 2021.
[41] Commission, ‘Assessment of the Code of Practice on Disinformation’ SWD/2020/180 final, 10 September 2020.
[42] Commission, ‘Code of Practice on Disinformation: First Annual Reports’ (October 2019) 7, available at https://digital-strategy.ec.europa.eu/en/news/annual-self-assessment-reports-signatories-code-practice-disinformation-2019 accessed 6 September 2021.
[43] Florian Saurwein and Charlotte Spencer-Smith, ‘Combating Disinformation on Social Media: Multilevel Governance and Distributed Accountability in Europe’ (2020) 8 Digital Journalism 820.
[44] Chris Marsden et al, ‘Platform Values and Democratic Elections: How Can the Law Regulate Digital Disinformation?’ (2019) 36 Computer Law & Security Review, available at https://doi.org/10.1016/j.clsr.2019.105373 accessed 24 February 2020.
[45] See, e.g., Aleksandra Kuczerawy, ‘Fighting Online Disinformation: Did the EU Code of Practice Forget about Freedom of Expression?’ in Kużelewska et al (eds) Disinformation and Digital Media as a Challenge for Democracy (Vol 6, Intersentia 2019).
[46] Commission, ‘Progress Report: Code of Practice Against Disinformation’ (29 October 2019), available at https://digital-strategy.ec.europa.eu/en/news/annual-self-assessment-reports-signatories-code-practice-disinformation-2019 accessed 6 August 2021.
[47] ERGA (n 40) 17–9, 24.
[48] Ibid 25–7.
[49] Ibid 28–9.
[50] Paul Butcher, ‘Disinformation and Democracy: The Home Front in the Information War’ (2019) Discussion Paper, European Politics and Institutions Programme, available at https://www.epc.eu/content/PDF/2019/190130_Disinformationdemocracy_PB.pdf accessed 7 September 2021.
[51] ERGA (n 40) 31–4.
[52] Ibid.
[53] Mike Ananny, ‘The Partnership Press: Lessons for Platform-Publisher Collaborations as Facebook and News Outlets Team to Fight Misinformation’ (Tow Center for Digital Journalism 2018), available at https://academiccommons.columbia.edu/doi/10.7916/D85B1JG9 accessed 6 August 2021.
[54] Lauren Teeling and Niamh Kirk, ‘Codecheck: A Review of Platform Compliance with the EC Code of Practice on Disinformation’ (2020) 12–13, available at https://www.researchgate.net/publication/340978676_Codecheck_A_Review_Of_Platform_Compliance_With_The_EC_Code_Of_Practice_On_Disinformation accessed 21 December 2022.
[55] ERGA (n 40) 38.
[56] Ibid.
[57] Mozilla, ‘Data Collection Log — EU Ad Transparency Report’, available at https://adtransparency.mozilla.org/eu/log/ accessed 23 September 2021.
[58] French Ambassador for Digital Affairs, ‘Twitter Ads Transparency Center Assessment’, available at https://disinfo.quaidorsay.fr/en/twitter-ads-transparency-center-assessment accessed 23 September 2021.
[59] Rory Smith, ‘The UK Election Showed Just How Unreliable Facebook’s Security System For Elections Really Is’ (BuzzFeed, 14 January 2020), available at https://www.buzzfeednews.com/article/rorysmith/the-uk-election-showed-just-how-unreliable-facebooks accessed 23 September 2021.
[60] Code of Conduct on Countering Illegal Hate Speech Online [2016], available at https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en accessed 18 January 2023; Commission, ‘Recommendation (EU) 2018/334 on Measures to Effectively Tackle Illegal Content Online’ C/2018/1177, 1 March 2018.
[61] Kuczerawy (n 45).
[62] Institute for Information Law, ‘The Legal Framework on the Dissemination of Disinformation Through Internet Services and the Regulation of Political Advertising: Final Report’ (December 2019), 31, available at https://www.ivir.nl/publicaties/download/Report_Disinformation_Dec2019-1.pdf accessed 21 December 2022.
[63] COM/2021/262 (n 6).
[64] Lawrence Lessig, Code Version 2.0 (Basic Books 2006) 135.
[65] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data [2016] OJ L119/1; Council Directive (EU) 2018/1808 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services in view of changing market realities [2018] OJ L303/69 (AVMSD).
[66] Iva Nenadić, ‘Unpacking the “European Approach” to Tackling Challenges of Disinformation and Political Manipulation’ (2019) 8(4) Internet Policy Review, available at https://policyreview.info/node/1436 accessed 1 September 2021.
[67] Ibid.
[68] Google, ‘EC EU Code of Practice on Disinformation Annual Report’ (October 2019) 18.
[69] Commission (n 46).
Copyright in AI-generated works: Lessons from recent developments in patent law

Rita Matulionyte* and Jyh-An Lee**

SCRIPTed: A Journal of Law, Technology & Society, Volume 19, Issue 1, February 2022

© 2022 Rita Matulionyte and Jyh-An Lee. Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

In Thaler v The Comptroller-General of Patents, Designs and Trade Marks (DABUS), Smith J. held that an AI owner can possibly claim patent ownership over an AI-generated invention based on their ownership and control of the AI system. This AI-owner approach reveals a new option to allocate property rights over AI-generated output. While this judgment was primarily about inventorship and ownership of an AI-generated invention in patent law, it has important implications for copyright law. After analysing the weaknesses of applying existing judicial approaches to copyright ownership of AI-generated works, this paper examines whether the AI-owner approach is a better option for determining copyright ownership of AI-generated works. The paper argues that while contracts can be used to work around the AI-owner approach in scenarios where users want to commercially exploit the outputs, this approach still provides more certainty and lower transaction costs for relevant parties than other approaches proposed so far.
Keywords: artificial intelligence, computer-generated work, AI-generated work, DABUS, patent, copyright, ownership

Cite as: Rita Matulionyte and Jyh-An Lee, "Copyright in AI-generated works: Lessons from recent developments in patent law" (2022) 19:1 SCRIPTed 5, https://script-ed.org/?p=4036

DOI: 10.2966/scrip.190122.5

* Senior Lecturer, Macquarie Law School, Sydney, Australia, rita.matulionyte@mq.edu.au.
** Professor, Faculty of Law, The Chinese University of Hong Kong, Hong Kong SAR.
1 Introduction

Artificial Intelligence (AI) technologies, such as machine learning (ML), are used widely in both the public and private sectors in various applications, such as online advertising, medical research and diagnosis, and facial recognition.
[1] AI/ML technologies are also being increasingly adopted by creative industries to generate outputs that would normally be protected under copyright law, such as art, [2] music, [3] novels, [4] and even film scripts.
[5] Some of these creative outputs have been successfully commercialised, such as the Portrait of Edmund Belamy, an AI-generated work that was sold at a Christie’s auction for $432,500.
[6] In the last few years, legal professionals, academics, [7] and policy bodies around the world [8] have been actively discussing whether AI-generated works are, and should be, protected under copyright laws and, if so, who should own the copyright. Except for a few jurisdictions that have provisions on computer-generated works (e.g., the UK, Ireland, New Zealand, India, and Hong Kong) in their copyright laws, [9] AI-generated works are not easily copyrighted in most countries because such laws require human authorship for copyright protection.
[10] At the policy level, there is no consensus on whether copyright law protection should be extended to AI-generated works. While some suggest that works autonomously generated by AI (as opposed to AI-assisted works) do not need copyright protection, [11] others argue that granting copyright protection for such works would increase incentives to develop sophisticated AI technology and eventually lead to more original creations reaching the public.
[12] Many who believe protection is desirable have considered different ownership allocation options, such as allocating authorship and initial ownership to the coder/developer of the AI, the user of the AI, or even to the AI itself.
[13] However, no consensus has been reached on which option is the most suitable.
Similarly, there have been discussions on whether AI-generated outputs should be protected under patent law, who should be considered the inventor, and who should own an AI-generated invention. While some have suggested that AI-generated outputs should be protected by patent law, [14] others have expressed serious concerns about the impacts that patent protection over AI-generated outputs could have.
[15] Those who agree that patent protection should be awarded in such situations have proposed suggestions on the allocation of inventorship and initial ownership, such as allocating initial ownership to the AI itself, [16] to the owner of the AI system (i.e., ‘computer’ or ‘machine’), [17] to the user of the AI system, [18] or to the hardware or software developer.
[19] The recent DABUS case concerned an application for a patent wherein the AI system DABUS was listed as an inventor of the claimed invention. The application was initially lodged before the US and UK patent offices and subsequently extended to other national offices, including the European Patent Office and national patent offices in Germany, Japan, South Korea, Australia, and Israel. The patent application was rejected by the US, UK, European, and Australian patent offices, all of which concluded that an AI system is not eligible for inventorship.
[20] All of these decisions were appealed, and the High Court of England and Wales (UK) was the first court to assess – and eventually reject – the appeal of the decision in Thaler v The Comptroller-General of Patents, Designs And Trade Marks ('DABUS').
[21] Interestingly, while the High Court of England and Wales held that, under current UK patent law, the AI machine DABUS cannot be listed as an inventor, the court recognised the possibility of listing the owner of the AI DABUS (in this case, Thaler) as both the inventor and the owner of the AI-generated invention.
[22] Although the DABUS case is primarily about inventorship in patent law, we argue that it also provides important implications for the debate on AI and copyright. Specifically, we consider whether Smith J’s viewpoint on patent ownership allocation, as indicated above, could be applied in copyright law. Namely, we aim to determine if it is reasonable to assign ownership of AI-generated works to the AI owner. While this option has been, to some extent, discussed in commentary on patent law, [23] it has not been significantly discussed in relation to copyright law.
This article revisits the existing discussion on copyright ownership of AI-generated works and critically assesses whether allocating ownership of AI-generated works to the AI owner is a more desirable option than those proposed previously. This paper does not weigh in on the important debate over whether copyright should subsist in AI-generated works, which has been discussed by a significant portion of the literature.
[24] Rather, assuming that such protection is desirable, this paper focuses on who should own the copyright of AI-generated works. Based on court decisions regarding copyright protection over computer-generated works in the UK and China, the paper first discusses the strengths and weaknesses of the most frequently proposed options for ownership of AI-generated works: the AI software developer and the AI software user. It then examines the patent ownership allocation rule proposed in DABUS and considers whether this rule should be applied in determining copyright for AI-generated works. The paper concludes by identifying the main advantages and issues that an AI-owner rule would pose in the domain of copyright law.
2 Current approaches to copyright ownership of AI-generated works: Software developer or user?

Those who argue that AI-generated works should be subject to copyright protection have different viewpoints regarding who should be the owner of such works. The most common proposals are that such copyright should be allocated to the AI software developer, the AI software user, or even the AI itself.
[25] Discussions on the ownership of AI-generated works are becoming less hypothetical as courts in a few jurisdictions have begun to confront the issue.
In this section, we use English and Chinese cases as examples illustrating existing judicial approaches to this issue. We focus on two approaches adopted in English and Chinese case law: the software developer as the owner and the software user as the owner of AI-generated works. We do not discuss the option of allocating ownership to the AI itself, both because this approach has never been taken by any court and because AI lacks the legal personhood necessary to have legal rights.
[26] Thus, from both a theoretical and practical perspective, the proposal of allocating the copyright to AI itself has been ruled out.
2.1 The software developer as the owner

Software developers provide an essential contribution to the creation of AI-generated works. Although these works are not directly created by software developers, they would not exist in the first place without the developer's software.
[27] It stands to reason, then, that software developers have the potential for ownership of AI-generated works produced using their software. The concept of a software developer is broadly defined: It can be used to refer to an individual programmer who develops the software, or it can be used to refer to a company that hires programmers to develop the software that is subsequently owned by the company.
This section introduces the Nova case from the UK and the Tencent case from China, in which the courts ruled that the software developers were the rightful owners of the computer-generated works. In Nova, the court applied the computer-generated works provisions of the Copyright, Designs and Patents Act (CDPA) 1988 to determine copyright ownership, while in Tencent the court relied on originality doctrine. Both courts emphasised that the justification for giving software developers ownership of the computer-generated outputs was that they had substantially determined how the outputs were arranged.
2.1.1 The Nova Case in the UK

The CDPA 1988 in the United Kingdom (UK) provides copyright protection for literary, dramatic, musical, and artistic works generated by computers in circumstances where there is no human author of the work.
[28] In other words, for a computer-generated work in the UK, human authorship is irrelevant to whether the work is copyrightable. The CDPA 1988 further stipulates that the author of the computer-generated work is ‘the person by whom the arrangements necessary for the creation of the work are undertaken’.
[29] Some commentators view the computer-generated work provisions in the CDPA 1988 as innovative, [30] and some believe it was the first legislation in the world protecting copyright in the context of AI.
[31] While commentators in some countries have advocated adopting the computer-generated works provisions from the CDPA 1988 to cope with new challenges raised by AI technologies, [32] the British courts have only applied these provisions once: in Nova Productions v Mazooma Games and Others, a case which did not involve any AI technology.
[33] The work concerned was the display of a series of composite frames generated by a computer program using bitmap files in a coin-operated game ‘Pocket Money’ that was designed, manufactured, and sold by the claimant Nova Productions Limited (‘Nova’).
[34] Kitchin J in Nova considered whether the computer-generated work in a computer game belonged to the programmer or the user:

In so far as each composite frame is a computer generated work then the arrangements necessary for the creation of the work were undertaken by [the programmer] Mr. Jones because he devised the appearance of the various elements of the game and the rules and logic by which each frame is generated and he wrote the relevant computer program. In these circumstances I am satisfied that Mr. Jones is the person by whom the arrangements necessary for the creation of the works were undertaken and therefore is deemed to be the author by virtue of s.9(3).
[35] As for the role of the player/user in the game, Kitchin J ruled the following:

The appearance of any particular screen depends to some extent on the way the game is being played. For example, when the rotary knob is turned the cue rotates around the cue ball. Similarly, the power of the shot is affected by the precise moment the player chooses to press the play button. The player is not, however, an author of any of the artistic works created in the successive frame images. His input is not artistic in nature and he has contributed no skill or labour of an artistic kind. Nor has he undertaken any of the arrangements necessary for the creation of the frame images. All he has done is to play the game.
[36] While Kitchin J's analysis in Nova seems plausible in determining copyright ownership between the programmer and the player in the video game, allocating copyright to the programmer instead of the user of the computer-generated work is not self-evident in all applications of software technologies and, in particular, AI technologies. First, AI algorithms are different from traditional software, as the former require a huge volume of data with which to train the machine. Because there are other equally important stakeholders, such as trainers and data providers, involved in the development of the AI software, programmers are not the only party that enables the operation of an AI application. Second, while software developers provide step-by-step instructions for the machine to follow in traditional computer programming, AI algorithms function through the observation of data instead of encoded instructions.
[37] Therefore, software developers have much less control over how a work is generated by the algorithm in the AI environment than in traditional computer programming. Consequently, the legal treatment of a software developer as determined in Nova might need to be adjusted based on AI’s technical character. Last, but not least, there are many scenarios other than video games where the works are generated because of users’ operation of the software. If users generate commercially valuable content for their own business purposes, they will certainly have more interest in using the content than video game players and software developers.
[38] Therefore, assigning copyright of the AI-generated works to the software developer is not always straightforward.
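This contrast can be made concrete with a minimal sketch (purely illustrative; the scale, corpus, and melodies below are invented for this example and are not drawn from any case). In the rule-based function, every output note follows from an instruction the developer wrote; in the toy 'learned' model, the behaviour derives from whatever training data the system observes, data the developer may neither supply nor control:

```python
import random
from collections import defaultdict

# Traditional programming: the developer encodes every step, so each
# output is traceable to an instruction the developer wrote.
def rule_based_melody(length):
    scale = ["C", "D", "E", "G", "A"]  # a rule chosen by the developer
    return [scale[i % len(scale)] for i in range(length)]

# Machine learning (simplified): a first-order Markov chain 'observes'
# note transitions in training data. The developer writes no melody
# rules; the output depends on whatever corpus the system is fed.
def train(corpus):
    transitions = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        transitions[current].append(following)
    return transitions

def data_driven_melody(model, start, length):
    melody = [start]
    while len(melody) < length and model.get(melody[-1]):
        melody.append(random.choice(model[melody[-1]]))
    return melody

corpus = ["C", "E", "G", "E", "C", "D", "E", "C"]  # training data, not code
print(rule_based_melody(8))                        # fully determined by the developer
print(data_driven_melody(train(corpus), "C", 8))   # shaped by the training data
```

Run repeatedly, the first function always produces the same melody, while the second varies with the corpus and the random choices made over it, a small-scale analogue of why the developer's control recedes in the AI setting.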
2.1.2 Tencent case in China

Most jurisdictions do not have computer-generated work provisions in their copyright laws like the UK and a few other commonwealth jurisdictions do.
[39] For example, in the United States and most European countries, AI-generated works are not copyrightable because of the absence of human creativity in their creation.
[40] Thus, it is challenging for software developers to claim ownership over works autonomously generated by AI. However, in the recent Chinese case Tencent v Shanghai Yingxun Technology Co. Ltd , Tencent successfully convinced the court that the software developer contributed originality to an AI-generated work and, therefore, should be its owner.
[41] From a comparative law perspective, this is an exceptional case, as it is unusual for the court to rule that AI developers exercised skill and judgment in an AI-generated work.
The disputed work in Tencent was an article about the Shanghai stock market written by the plaintiff’s AI software Dreamwriter.
[42] Dreamwriter collected data from multiple sources, analysed the data using its machine-learning algorithms, verified the data, wrote an article using the verified data, and then published it.
[43] The defendant argued that the article was not copyrightable because there was no human creativity involved in its production.
[44] However, the court was convinced by the plaintiff that human originality could be found in different phases of Dreamwriter’s process of creating the article. The court explained that, although it only took Dreamwriter two minutes to produce the disputed article which was the result of the software’s operation of established rules, algorithms, and templates without any human participation, the automatic operation of Dreamwriter did not occur without a reason.
[45] They also noted that the software was not self-aware.
[46] Instead, Dreamwriter’s autonomous operation reflected its developers’ personalised selection and arrangement of data type, data format, the conditions that triggered the writing of the article, the templates of article structure, the setting of the corpus, and the training of the intelligent verification algorithm model.
[47] The court in Tencent viewed the software developer as the owner of the AI-generated work based on originality doctrine in copyright law. The way that the court determined originality was similar to that typically applied in cases involving compilation, which was that ‘the selection or arrangement of…[existing] contents constitute intellectual creations’.
[48] The court determined that originality existed in the developer’s choices in setting the criteria for the selection and arrangement of existing data, which was subsequently used by the AI to complete the selection and arrangement.
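To see where the developer choices identified by the court would sit in such a system, consider a hypothetical sketch (this is not Dreamwriter's actual code; the index name, threshold, and template are invented for illustration). The data-selection criterion, the condition that triggers an article, and the article template are all fixed by the developer in advance, even though the software later runs without human participation:

```python
from dataclasses import dataclass
from typing import Optional

# Developer-made 'arrangements' of the kind the Tencent court relied on:
# which data to select, when an article is triggered, and how it is phrased.
SELECTED_INDEX = "SSE Composite"   # data-selection criterion (developer's choice)
TRIGGER_THRESHOLD = 0.5            # write only if the daily move is >= 0.5% (developer's choice)
TEMPLATE = "{index} closed at {close:.2f}, {direction} {change:.2f}% on the day."  # template (developer's choice)

@dataclass
class MarketData:
    index: str
    close: float
    change_pct: float

def generate_article(data: MarketData) -> Optional[str]:
    if data.index != SELECTED_INDEX:
        return None  # filtered out by the developer's selection criteria
    if abs(data.change_pct) < TRIGGER_THRESHOLD:
        return None  # the developer's trigger condition is not met
    direction = "up" if data.change_pct >= 0 else "down"
    return TEMPLATE.format(index=data.index, close=data.close,
                           direction=direction, change=abs(data.change_pct))

# The run itself involves no human participation, yet every branch above
# reflects a choice the developer made in advance.
print(generate_article(MarketData("SSE Composite", 3451.27, 0.82)))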
[49]

2.2 Software user as the owner – the Feilin case in China

While the plaintiff's strategy in Tencent for proving the developer's contribution in the AI-generated work was successful in that litigation, the court's finding of originality is not applicable to all AI creations. Because of their nested non-linear structure, AI models are usually applied in a black-box manner. Therefore, their 'interpretability' or 'explainability' – that is, the degree to which a human observer can intrinsically understand the cause of a decision by the system – has drawn significant attention in recent years.
[50] Sometimes even AI developers are unable to fully understand AIs’ decision-making process or predict the systems decisions or outputs.
[51] Thus, there are flaws in the argument that all AI works are well designed and that their products can be anticipated by their developers. In other words, it is conceivable that not all parts of an AI work reflect the developer's skill or judgment and, hence, the finding of originality in Tencent is not universally applicable. Moreover, the involvement of other parties, such as machine operators, trainers, and data providers, is sometimes essential to the production of AI-generated works.
[52] Many AI developers are not able to substantially envisage the AI-generated works because they cannot control or plan other parties' data provision or processing behaviours.
[53] The role of these developers in the production of computer-generated works is much more marginal than that of the developers in Nova and Tencent. This difference also reveals that the software-developer-as-the-owner approach is not a universally justified and ideal option.
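The black-box problem noted above can be illustrated with a toy sketch (the weights below are random stand-ins for parameters that a real system would learn from training data). The model's 'rules' exist only as numbers flowing through nested non-linear functions, so there is no human-readable instruction a developer could point to in order to explain a given output:

```python
import math
import random

random.seed(0)

# Stand-ins for learned parameters: in a real model these come from
# training data, not from any instruction a developer wrote.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def predict(x):
    # Nested non-linearity: each layer's output feeds the next, which is
    # why tracing a single output back to a 'reason' is so difficult.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

print(predict([0.2, -0.7, 1.0]))  # a number, but no rule anyone can point to
```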
Not all courts have held that software developers are the rightful owners of computer-generated works. Some have argued that software users are the appropriate owners of AI-generated works because they provide considerable input into shaping the outputs.
[54] Also, a software user might be more economically affected by the ownership allocation of AI-generated works than the software developer because the former deploys the AI software to produce output for his or her own commercial interest.
In the recent Chinese case Feilin v Baidu, which was decided by the Beijing Internet Court (BIC), the disputed work was an article titled "Judicial Big Data in the Film, Television and Entertainment Industry" published by the plaintiff.
[55] The defendant argued that the article was not copyrightable because it was purely the result of the plaintiff’s search in the Wolters Kluwer legal database.
[56] The result was presented by the Wolters Kluwer Database as an analytical report, which included statistics and corresponding charts on types of claims, procedures, industries involved, amount of the claims, decision-making time, courts, judges, lawyers and firms, and frequently cited statutes in court decisions concerning the entertainment industry.
[57] The court eventually ruled for the plaintiff because the latter created original content other than the search result in the disputed article; however, the court also shed light on the ownership issue with regard to the results of the search using the Wolters Kluwer Database.
[58] The court explained that there were two key players involved in the process: the programmer who developed the database software and the user who used the database to produce the search results.
[59] They determined that neither the programmer nor the user could be the author of the search result: the programmer did not search the database by inputting keywords, and thus the search result was not a reflection of his original expression, [60] and the user only typed in the keywords used to search the database, which was not an original expression under copyright law either. Thus, the search result was created by the Wolters Kluwer Database based on the input keywords, algorithms, rules, and models.
[61] However, Wolters Kluwer Database was not an author because it is not considered a natural person under the law.
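A simplified sketch makes this division of labour concrete (it is hypothetical and is not the Wolters Kluwer system; the records and figures are invented). The user's entire contribution is a single keyword, while the selection logic, aggregation, and layout of the report are built into the system in advance:

```python
# Hypothetical stand-in for a pre-built legal analytics database.
CASE_DATABASE = [
    {"industry": "film", "claim": "copyright", "amount": 120_000},
    {"industry": "film", "claim": "contract", "amount": 45_000},
    {"industry": "music", "claim": "copyright", "amount": 80_000},
]

def analytical_report(keyword: str) -> str:
    # Everything below (selection logic, aggregation, layout) is fixed by
    # the system's builders; the user's entire contribution is `keyword`.
    hits = [c for c in CASE_DATABASE if c["industry"] == keyword]
    by_claim = {}
    for case in hits:
        by_claim[case["claim"]] = by_claim.get(case["claim"], 0) + 1
    lines = [f"Analytical report for '{keyword}' ({len(hits)} cases)"]
    lines += [f"- {claim}: {count} case(s)" for claim, count in by_claim.items()]
    lines.append(f"- total amount claimed: {sum(c['amount'] for c in hits)}")
    return "\n".join(lines)

print(analytical_report("film"))  # the user typed one word; the system did the rest
```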
[62] Interestingly, the court went beyond the existing law to analyse policy issues regarding the legal rights over the search result. Recognising the commercial and communicative value of computer-generated works, the court indicated that allocating certain rights over the works to private parties was better than leaving them in the public domain.
[63] Between the software developer and user, the court determined that it was the latter that deserved legal protection.
[64] The argument was, first, that the developer had already recouped their investment in developing the software via a licensing fee or ownership of intellectual property rights into software.
[65] Second, compared to the software developer, the software user had more incentive to use and disseminate the computer-generated works because they had typed in the keywords to initiate the search and had a plan for the use of the works.
[66] Thus, assigning rights to the computer-generated works to the user rather than the software developer would better foster cultural and scientific development, as the user had substantive incentive to use and disseminate the works.
[67] The above reasoning of the BIC was not based on the existing Chinese copyright law, and it was not the primary conclusion of the judgment. It was, at most, the judge's personal normative viewpoint. Nevertheless, this reasoning presents a different position in favour of the software user rather than the developer regarding ownership of computer-generated and/or AI-generated works. While the BIC was correct that the software user in this case had a greater interest in using the resulting works than the software developer, the assignment of relevant rights to software users has the same problem as the conclusion of the Nova rule, which was to assign the ownership to the software developer. As exemplified in Nova and Feilin, software users' interests in the resulting works vary from case to case. Although the user in Feilin had more substantial interests in utilising the resulting works than the software developers, not all users of AI algorithms or software have similar interests.
[68] Moreover, while some users contribute significantly to the AI-generated works, others’ contribution is negligible.
[69] Thus, neither the reasoning in Feilin nor that of Nova can be the singular determinant of the optimal solution in all cases involving AI-generated works. Moreover, in Feilin , the user only typed in “film” as the search keyword, and the analytical report was automatically produced by the Wolters Kluwer Database.
[70] Given the user’s negligible contribution to the resulting work and their insignificant investment in the software system, assigning an exclusive right of ownership to them might not be justified. While the user has a substantial interest in utilising the search result, a license from a more legitimate owner could serve the same function.
3 A new option: The AI owner as the owner of AI-generated outputs

The above analysis has shown the weaknesses of the current proposals to allocate copyright ownership of AI-generated works to either software developers or users. In this section, we will explore another option, which is to allocate copyright of such works to the AI owner, as suggested in the UK DABUS decision. Although DABUS concerns patent inventorship and ownership, we argue that the ownership allocation rule proposed in the case also has important implications for copyright. This section will examine whether allocating ownership of AI-generated works to the AI owner would be a more viable solution than those previously discussed.
3.1 Why the patent law debate is relevant

Patent law and copyright law are similarly premised on the economic rationale of incentivising creativity and innovation. Thus, legal doctrines from these two fields often influence each other. For example, in Metro-Goldwyn-Mayer Studios, Inc. v Grokster Ltd., the US Supreme Court borrowed from patent law to establish liability for inducement in copyright infringement.
[71] When extending the staple article of commerce doctrine from patent law to copyright law in Sony Corp. of America v Universal City Studios, Inc., the US Supreme Court explained that although copyright law and patent law “are not identical twins”, their similarities made patent law an appropriate source from which to borrow.
[72] Likewise, many scholars have advocated for more harmonisation of the rules governing ownership in these two fields.
[73] Despite differences, patent law and copyright law share substantially similar rules on initial ownership allocation. Under copyright law, the author of the work is the physical (natural) person who created the work (a ‘romantic author/creator’ idea), [74] and they would also normally be the initial owner of the work.
[75] As an exception, works created by an employee in the course of employment are owned by the employer, unless there is a contract stating otherwise.
[76] Likewise, under patent law, the inventor is usually the natural person who conceived the invention, [77] and they are also the initial owner of the invention. Like copyright law, in cases involving an employment relationship, the employer is automatically the first owner of the invention and, eventually, of the patent.
[78] DABUS triggered an interesting inquiry concerning IP ownership of AI-generated output. While the High Court of England and Wales confirmed that an AI system cannot be listed as an inventor, it opened up the possibility of listing an owner of AI as both the inventor and the owner of the patent on an AI-generated invention. Given the above similarities of rules governing ownership in patent law and copyright law, it is worth investigating whether the allocation of the initial ownership rule concerning AI-generated output in DABUS could be suitable in a copyright law context.
[79]

3.2 DABUS and the (potential) 'owner of AI' rule

In DABUS, the High Court of England and Wales concluded that, under the Patent Act 1977 (UK), the inventor must be a natural person.
[80] What is more relevant for the purpose of this article is the court's suggestion that, in cases involving AI-generated inventions, the AI owner should be the owner of the invention. According to Smith J:

(…) there is a general rule that the owner of a thing is owner of the fruits of that thing. Thus, the owner of a fruit tree will generally own the fruit produced by that tree.
[81] Smith J suggested that this analogy applies in considering ownership of AI-generated inventions. As a result, the court concluded that the owner of the DABUS system should own the system’s outputs.
[82] This may be the first court decision indicating that the AI owner should also be the owner of IP rights over the AI-generated output. We refer to this ownership allocation rule as the AI-owner approach.
Additionally, it is worth noting that Thaler, who was the patent applicant, was not a mere 'owner'; he was also the person who created this machine, patented it, possessed it, and used it to generate the invention claimed in the subject patent application. Smith J was aware of this and stated that Thaler could "rely on this ownership and control of DABUS" to claim his entitlement to the patent.
[83] He then made a further reservation:

I proceed on the basis that Dr Thaler is the only person involved in the ownership and operation of DABUS. If – contrary to my conclusion – ownership or something like it were sufficient to effect a transfer of the invention or the right to apply for a patent, it would be necessary to articulate clearly what forms of ownership and/or control would suffice. These are not matters that I need to consider in this judgment.
[84] This suggests that while the court is generally ready to accept that the AI owner is the owner of the patent in AI-generated outputs, further discussion is needed on what role control of AI plays in allocating ownership over AI-generated outputs. We address this question later in the paper.
[85] Below, we first assess the strengths and weaknesses of this AI-owner approach in the area of patent law and then examine its applicability in copyright law.
3.3 The AI-owner approach in a patent law context

The ownership allocation rule in DABUS, or similar rules, has appeared in the patent law literature.
[86] For instance, Ryan Abbott, who is one of the coordinators of the DABUS litigation around the world, strongly supports this ownership allocation option.
[87] According to Abbott, the main reason for allocating initial ownership of AI-generated inventions to the owner of AI is “because this is most consistent with the rules governing ownership of property and it would most incentivise innovation”.
[88] Namely, allocating ownership to the AI owner as opposed to the user would arguably incentivise the provision of access to the AI and, thus, more innovation. For instance, IBM’s Watson program, which was initially designed to compete on the game show Jeopardy! and to invent new food recipes, was subsequently made available to different software application providers.
[89] This enabled them to create services with Watson’s capabilities, and Watson is now assisting with financial planning, the development of treatment plans for cancer patients, the identification of potential research study participants, distinguishing genetic profiles that might respond well to certain drugs, and acting as a personal travel concierge.
[90] If Watson invented something while being used by other users and those users owned the invention by default, IBM would be disincentivised to give access to Watson to other users. If, however, users wanted to own Watson-generated inventions, this would require an agreement and possibly a fee paid to IBM.
[91] Unsurprisingly, IBM has expressed its support for the AI-owner approach in patent law.
[92] On the other hand, this AI-owner approach might raise a few issues. For example, what if the owner of the AI does not contribute anything substantial to the invention – should they still own it? If the DABUS AI were sold to a company that uses it to make inventions but does not contribute to these inventions in any substantial way, is it reasonable that the (new) DABUS owner also owns patents on inventions generated by DABUS? While some might question the viability of this result, [93] we suggest that such an outcome is reasonable. Ownership does not require any substantial contribution; rather, it is about the amount of investment. IP generated in the course of employment and IP assignment are both notable examples. If you invested in ownership, you should own the outputs. If you bought a garden with apple trees, you own the apples, even if you did not invest in planting and taking care of the garden initially. A similar rationale underlies the rules on copyright ownership of computer-generated works in the CDPA 1988. Scholars have argued that computer-generated works under the CDPA 1988 are nearer to entrepreneurial works than to authorial works because their production does not involve human creativity.
[94] Authorial works are protected in copyright law due to the originality and creativity contributed by their authors, whereas entrepreneurial works are protected to incentivise investment in making specific works available to the public.
[95] If we follow this line of reasoning, it would be reasonable to assign ownership of a computer-generated work to the person investing in the production of the work rather than the person contributing originality to the work.
Secondly, if the owner of the AI is the owner of the AI-generated invention, AI developers may prefer licensing their AI over selling it.
[96] This possible development may be reinforced by the practice of the information technology (IT) industry, where software is seldom ‘sold’ or, using IP law terminology, assigned. Instead, software is normally licensed under the terms of sole, exclusive, or non-exclusive license.
[97] If such practice remains in the AI industry, developers would in most cases remain the owners of AI even in cases where they give exclusive or sole licenses to users who then actually control the AI system and plan to commercially exploit the resulting AI-generated invention. We discuss this issue in a subsequent section.
[98]

3.4 The AI-owner approach in a copyright law context

In the context of copyright, allocating ownership of AI-generated creative works to the AI owner is an innovative idea. It is different from other options, such as the AI software developer or AI software user as an owner of AI-generated works, as discussed in section 2. In some cases, the software developer, user, and owner will coincide, and the issue of how to distinguish them will not arise. Thaler, who developed, owned, and used DABUS, is a classic example in patent law. In copyright law, it is also possible that a company develops an AI and uses it to generate works, thus giving them copyright in those works (assuming that the AI-generated works are protected under copyright in the first place). The Tencent case in China discussed above is an example of such a scenario.
[99] In other cases, the developer, user, and owner might be three (or more) different parties. For instance, a software company (developer) may develop an AI system, sell it to another company (owner) which then licenses it to consumers (users) who use it to generate works. The debate over whether the AI developer, AI user, or the AI owner should be the owner of the AI-generated works makes sense when these three roles are played by different parties. In this section, we will identify the advantages and challenges of the proposal of implementing the AI owner rule in the copyright context and analyse whether this approach is more viable than previously discussed ownership allocation options.
3.4.1 Advantages

Nominating the AI owner as the owner of the AI-generated works has several advantages. First, it might be difficult to determine who – the AI developer, the AI user, or a third party, such as a data trainer or provider – made a sufficient contribution (or 'necessary arrangements' under the CDPA 1988 in the UK) to the final output of the AI.
[100] If ownership is allocated to the AI owner, there is no need to determine who made what input or whose input (developer’s or user’s) is the most indispensable to the AI generative process. The allocation is much more straightforward: if you own the AI, you own its AI-generated outputs. This ensures more legal certainty and foreseeability. It also reflects the general principles of property law: if you own an apple tree, you own the apples.
Second, both the software-developer-as-the-owner and the user-as-the-owner approaches emphasise the essential contribution made by the parties in the creation of the output. Therefore, both approaches face the dilemma of potentially leaving AI-generated works in the public domain if neither the developer nor the user has made sufficient or direct contributions to the final output.
[101] If the AI owner is considered to be the owner of the AI-generated outputs, such a problem is unlikely to emerge.
[102] If we agree that IP protection for AI-generated works is a desirable policy, it is clear that the AI owner rule can achieve this policy goal with fewer transaction costs than the other two approaches.
Third, co-ownership situations are less likely to arise or are likely to be less problematic if the AI owner rule is applied. If ownership of AI-generated outputs is allocated to a person who contributed to the final outputs, there might be situations where both the AI developer and AI user provided sufficient contributions and, thus, qualify as co-owners.
[103] In such situations, the exercise of rights might become difficult and costly, especially in situations without a pre-established commercial relationship. The significant transaction costs of such co-ownership will make the copyright of AI-generated work less valuable and, consequently, its consumption may fall below a socially optimal level.
[104] For instance, if a musician uses Google’s AI system Magenta to generate music [105] and both Google and the musician are recognised as co-owners of the AI-generated music, the exercise of rights to the song would become difficult. The musician might need Google’s permission to license or transfer the rights, and vice versa.
[106] If the AI owner is given ownership over the AI-generated outputs, a co-ownership situation is less likely to emerge. Most often, AI is developed by a single company. Even if the AI is developed by several persons or companies and, therefore, several people co-own the AI, [107] it would be easier for them to manage the co-ownership relationship since they already have a working relationship. There might be instances where several AI modules, owned by different entities, are used to produce the final output (e.g., AI-enabled software writing the text and other software editing the output), which may lead to co-ownership situations without a pre-existing relationship between owners. However, it remains to be seen how frequently these complex situations arise and whether they could be tackled through the contractual arrangements discussed below.
Finally, as discussed above, if owners of the AI are allocated ownership over the AI-generated outputs, they would have incentives to make the AI system available to users: regardless of the contributions of the users, the AI owners would own the outputs of the AI. Arguably, IBM would not have given users access to the AI Watson if the company had not been able to claim ownership over the outputs that Watson generated.
[108] At the same time, other ownership arrangements could be made if users are not satisfied with this default rule. For instance, if the user wants to own AI-generated outputs, they might agree to pay (higher) licensing fees, which would compensate for the investments that the AI owner made in developing or acquiring the AI system. Similarly, if the AI developer is not interested in keeping ownership over the AI-generated outputs (as well as any responsibilities it may entail), they may contractually assign their rights to the outputs to the AI user/licensee.
3.4.2 Challenges

At the same time, the AI-owner rule seems unreasonable in situations where an AI system is licensed to another party who uses it to produce commercially valuable outputs over which the party would expect to have exclusive control. For example, if an IT company develops an AI system that produces media articles and licenses it to a media company, the latter would expect that they own, or at least can exclusively use, the media articles autonomously produced by the AI system. If they do not enjoy such rights over the AI outputs and thus are restricted from using them in their commercial practice, they would be discouraged from using the AI system in the first place. Similar concerns were revealed in the Feilin case, in which the court held that legal rights, if any, should be assigned to the user of the software instead of its owner or developer.
[109] This issue becomes even more obvious when an AI system is licensed to end users who have no negotiating power to influence the terms of license. For instance, the user of the AI system Magenta might invest time and effort in making Magenta generate a song they like, and it might seem unreasonable if the user does not own, or at least cannot exploit, the work according to their business plans. These end users do not have sufficient bargaining power to negotiate terms that are different from the standard terms of use provided by the platform, especially if they are able to access the system for free. Therefore, considering the business practicability of such users, the AI-owner rule does not seem to be an ideal option.
This may be the reason that in DABUS , Smith J held that it was reasonable to allocate ownership of the AI-generated invention to Thaler, the AI owner, assuming that he was the only person who both owned and controlled the AI when it produced the invention.
[110] Smith J also reasoned that Thaler could claim ownership over the AI-generated invention because he controlled the DABUS AI system. Smith J may have been considering a situation wherein the AI system was licensed to a party that was not the AI owner and that party planned to commercially exploit the AI-generated invention. To put it differently, Smith J implied that Thaler might be the owner of the invention generated by DABUS because he had not licensed the AI system to another party to generate new inventions. This understanding of DABUS leads to two questions concerning the AI-owner approach. First, should this approach require that the AI owner have legal and factual control over the AI in order to own the AI-generated output? Second, is the AI-owner rule still feasible given the possible scenario wherein an AI system is licensed to another party for commercial use? As to the first question, we believe that adding a control factor to the AI-owner rule does not help; its infeasibility is obvious. When AI is used by a licensee (i.e., one party owns the AI and the other party has control over the AI), neither party would be able to meet the requirements of the own-and-control test. In other words, the ownership allocation rule will fail to identify the appropriate owner, and the AI-generated output would, therefore, be in the public domain. If we believe that AI-generated works should be protected by IP, putting them in the public domain is certainly not desirable.
As to the second question, we trust that the AI-owner rule is still a reasonable default rule for ownership allocation because, in addition to the advantages identified in section 3.4.1, the market itself can properly regulate the situation where the licensee is the primary user of the AI. All default property allocation rules only provide a common parameter for various dimensions concerning the subject property, and these rules are always subject to contractual adjustment by private parties. If AI owners do not provide users with sufficient rights to use or commercialise the content they produce using an AI system, users may stop using the system and shift to other AI software vendors. Consequently, such AI owners will likely be pressured by market competition to set acceptable licensing terms.
In summary, an ideal default rule for initial ownership allocation can, under normal conditions, reduce transaction costs between parties.
[111] While the rule is desirable for most parties, other parties can easily contract around it with low transaction costs.
[112] Based on this understanding, we argue that the AI-owner approach is a feasible option for ownership allocation because it provides significant legal certainty in ownership and leads to transaction costs lower than those brought by the person-who-made-necessary-arrangements (AI-software-developer or AI-software-user) approach. Although, like all default property allocation rules, the AI-owner approach cannot address property interests in every social relationship, it can be adjusted by private ordering through contractual arrangements.
4 Conclusion

The goal of this paper was to revisit the discussion on copyright ownership of AI-generated content and to provide an original analysis of whether the AI-owner rule, as recently proposed in DABUS by the High Court of England and Wales, could be a more viable ownership allocation option than the other approaches proposed so far (mainly, the necessary-arrangements test, which allocates ownership either to the software developer or the software user). This paper has demonstrated that while the AI-owner rule may not properly address the end user's commercial considerations in certain situations, this issue can usually be resolved by market competition and private ordering. More importantly, the AI-owner rule provides legal certainty and generates lower transaction costs than the previously proposed approaches.
[1] E.g. Zeynep Tufekci, “How Recommendation Algorithms Run the World” ( Wired , 22 April 2019), available at https://www.wired.com/story/how-recommendation-algorithms-run-the-world/ (accessed 20 April 2021); Hafizah Osman, “New AI Tech Reshapes Skin Cancer Detection” ( Healthcareit , 30 January 2019), available at https://www.healthcareit.com.au/article/new-ai-tech-reshapes-skin-cancer-detection (accessed 20 April 2021); Yason Tashea, “Courts Are Using AI to Sentence Criminals. That Must Stop Now” ( Wired , 17 April 2017), available at https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/ (accessed 20 April 2021); Asha Barbaschow, “AFP used Clearview AI Facial Recognition Software to Counter Child Exploitation” (ZDnet, 15 April 2020) , available at https://www.zdnet.com/article/afp-used-clearview-ai-facial-recognition-software-to-counter-child-exploitation/ (accessed 20 April 2021).
[2] Gabe Cohn, “AI Art at Christie’s Sells for $432,500” ( The New York Times , 25 October 2018), available at https://www.nytimes.com/2018/10/25/arts/design/ai-art-sold-christies.html (accessed 20 April 2021).
[3] “Warner Music Signs First Ever Record Deal with an Algorithm”, ( The Guardian , 23 March 2019), available at https://www.theguardian.com/music/2019/mar/22/algorithm-endel-signs-warner-music-first-ever-record-deal (accessed 20 April 2021).
[4] Chloe Olewitz, “A Japanese A.I. Program Just Wrote a Short Novel, and it Almost Won a Literary Prize” ( Digital Trends , 23 March 2016), available at https://www.digitaltrends.com/cool-tech/japanese-ai-writes-novel-passes-first-round-nationanl-literary-prize/ (accessed 20 April 2021).
[5] Annalie Newitz, “Movie Written by Algorithm Turns Out to Be Hilarious and Intense” ( ArsTechnica , 6 September 2016), available at https://arstechnica.com/gaming/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/ (accessed 20 April 2021).
[6] Cohn, supra n. 2.
[7] E.g. Courtney White and Rita Matulionyte, "Artificial Intelligence Painting a Larger Picture on Copyright" (2020) 30(4) Australian Intellectual Property Review 224-242; Russ Pearlman, "Recognizing Artificial Intelligence (AI) As Authors and Inventors Under U.S. Intellectual Property Law" (2018) 24(2) Richmond Journal of Law and Technology 1-38; Ana Ramalho, "Will Robots Rule The (Artistic) World? A Proposed Model For The Legal Status Of Creations By Artificial Intelligence Systems" (2017) 21(1) Journal of Internet Law 12-25; Rex M. Shoyama, "Intelligent Agents: Authors, Makers, and Owners of Computer-Generated works in Canadian Copyright Law" (2005) 4(2) Canadian Journal of Law and Technology 129-140; Julia Dickenson, Alex Morgan, and Birgit Clark, "Creative Machines: Ownership of Copyright in Content Created by Artificial Intelligence Applications" (2017) 39 European Intellectual Property Review 457-460, pp. 457-458; Tim W Dornis, "Artificial Creativity: Emergent Works and the Void in Current Copyright Doctrine" (2020) 22 Yale Journal of Law & Technology 1-60, pp. 20-24; Andres Guadamuz, "Do Androids Dream of Electric Copyright? Comparative Analysis of Originality in Artificial Intelligence Generated Works" (2017) 2 Intellectual Property Quarterly 169-186, pp. 182-183; Amir H Khoury, "Intellectual Property Rights for 'Hubots': On the Legal Implications of Human-Like Robots as Innovators and Creators" (2017) 35(3) Cardozo Arts and Entertainment Law Journal 635-668; Massimo Maggiore, "Artificial Intelligence, Computer Generated Works and Copyright" in Enrico Bonadio and Nicola Lucchi (eds.) Non-Conventional Copyright: Do New and Atypical Works Deserve Protection? (Cheltenham: Edward Elgar, 2018), pp. 387-389.
[8] E.g. “UK Government Consultation on Artificial Intelligence and Intellectual Property”, available at https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views (accessed 20 April 2021); European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies (2020/2015(INI)); USPTO, “Public Views on Artificial Intelligence and Intellectual Property” (October 2020), available at https://www.uspto.gov/sites/default/files/documents/USPTO_AI-Report_2020-10-07.pdf (accessed 20 April 2021); The WIPO Conversation on Artificial Intelligence and Intellectual Property, available at www.wipo.org (accessed 20 April 2021).
[9] See Copyright, Design and Patents Act 1988 (UK), s. 9(3); Copyright Act 1994 (NZ), s. 5(2)(a); Copyright Act 1957 (India), (2)(d)(vi); Copyright Ordinance (Hong Kong) cap 528, s. 11(3); Copyright and Related Rights Act 2000 (Ireland), s. 21(f).
[10] E.g. Jani Ihalainen “Computer Creativity: Artificial Intelligence and Copyright” (2018) 13(9) Journal of Intellectual Property Law & Practice 724-728, pp. 726-727; Paul Lambert, “Computer-Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning” (2017) 39(1) European Intellectual Property Review 12-20, p. 14; Maggiore, supra n. 7, pp. 387-389; Mark Perry and Thomas Marhoni, “From Music Tracks to Google Maps: Who Owns Computer-Generated Works?” (2010) 26(6) Computer Law & Security Review 621-629, pp. 624-625; Ramalho, supra n. 7, pp. 14-16; Jacob Turner, Robot Rules Regulating Artificial Intelligence (Basingstoke: Palgrave Macmillan, 2019), pp. 123-124.
[11] E.g. Patrick Zurth, “Artificial Creativity? A Case Against Copyright Protection for AI Generated Works”, UCLA Journal of Law & Technology (forthcoming).
[12] E.g. Shlomit Yanisky-Ravid and Luis Antonio Velez-Hernandez, “Copyrightability of Artworks Produced by Creative Robots, Driven by Artificial Intelligence Systems and the Originality Requirement: The Formality-Objective Model” (2018) 19(1) Minnesota Journal of Law, Science & Technology 1-54; Pearlman, supra n. 7; Jane C. Ginsburg and Luke A. Budiardjo, “Authors and Machines”, (2019) 34(2) Berkeley Technology Law Journal 343-448.
[13] E.g. Shlomit and Velez-Hernandez, supra n. 12 (suggesting that AI should hold copyright); Pearlman, supra n. 7; Ginsburg and Budiardjo, supra n. 12 (suggesting different ownership allocation options depending on contributions).
[14] See Ryan Abbott, “I Think, Therefore I Invent: Creative Computers and the Future of Patent Law” (2016) 57(4) Boston College Law Review 1079-1126; Erica Fraser, “Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law” (2016) 13(3) SCRIPTed 305-333, p. 328; W. Michael Schuster, “Artificial Intelligence and Patent Ownership” (2018) 75(4) Washington & Lee Law Review 1945-2004.
[15] E.g. Ryan Abbott, “Hal the Inventor: Big Data and Its Use by Artificial Intelligence” in C Sugimoto, H Ekbia and M Mattioli (eds), Big Data Is Not a Monolith (Cambridge: MIT Press, 2016); L Floridi, The Fourth Revolution: How the Infosphere is Reshaping Human Reality (Oxford: Oxford University Press, 2014), p. 129; L Vertinsky and T Rice, “Thinking About Thinking Machines: Implications Of Machine Inventors For Patent Law” (2002) 8(2) Boston University Journal of Science & Technology Law 574-613, p. 586.
[16] Colin R. Davies, “An Evolutionary Step in Intellectual Property Rights – Artificial Intelligence and Intellectual Property” (2011) 27(6) Computer Law & Security Review 601-619, p. 617.
[17] Abbott, supra n. 14; Davies, supra n. 16, p. 618; see also Vertinsky and Rice, supra n. 15, p. 609.
[18] See Schuster, supra n. 14, pp. 1985-1988.
[19] Ben Hattenbach and Joshua Glucoft, “Patents in An Era of Infinite Monkeys and Artificial Intelligence” (2015) 19(1) Stanford Technology Law Review 32-51, pp. 48-49.
[20] I bid.
[21] Thaler v The Comptroller-General of Patents, Designs And Trade Marks [2020] EWHC 2412 (Pat).
[22] It should be noted that while the Court of Appeals upheld the High Court’s decision on 21 September 2021, the former did not specifically address whether Thaler could be listed as both the inventor and the owner of the AI-generated invention.
Thaler v The Comptroller-General of Patents, Designs And Trade Marks [2021] EWCA Civ 1374.
[23] Abbott, supra n. 14; Davies, supra n. 16, p. 618; see also Vertinsky and Rice, supra n. 15, p. 609.
[24] E.g. Jyh-An Lee, “Computer-generated Works under the CDPA 1988” in Jyh-An Lee, Reto Hilty and Kung-Chung Liu (eds), Artificial Intelligence and Intellectual Property (Oxford: Oxford University Press, 2021), pp. 183-194; White and Matulionyte, supra n. 7, pp. 232-236; Burkhard Schafer at al., “A Fourth Law of Robotics? Copyright and the Law and Ethics of Machine Co-Production” (2015) 23 Artificial Intelligence and Law 217-240, pp. 227-230.
[25] Text accompanying n. 3.
[26] Annemarie Bridy, “Coding Creativity: Copyright and the Artificially Intelligent Author” (2012) 2012 Stanford Technology Law Review 5-28, p. 51; Eliza Mik, “AI as a Legal Person?” in Jyh-An Lee, Reto Hilty and Kung-Chung Liu (eds.), Artificial Intelligence and Intellectual Property (Oxford University Press, 2021), pp. 430-436; White and Matulionyte, supra n. 7, p. 237.
[27] Robert Yu, “The Machine Author: What Level of Copyright Protection Is Appropriate for Fully Independent Computer-Generated Works?” (2017) 165(5) University of Pennsylvania Law Review 1245-1270, p. 1258.
[28] Copyright, Design and Patents Act (CDPA) 1988, s. 9(3), s. 178.
[29] CDPA 1988, s. 9(3).
[30] Ysolde Gendreau, “Copyright Ownership of Photographs in Anglo-American Law” (1993) 15(6) European Intellectual Property Review 207-211, pp. 210-211.
[31] Toby Bond and Sarah Blair, “Artificial Intelligence and Copyright: Section 9(3) or Authorship without an Author” (2019) 14(6) Journal of Intellectual Property Law & Practice 423.
[32] Bridy, supra n. 26, pp. 66-67; Cody Weyhofen, “Scaling the Meta-Mountain: Deep Reinforcement Learning Algorithms and the Computer-Authorship Debate” (2019) 87(4) UMKC Law Review 979-996, p. 996.
[33] Nova Productions Ltd v Mazooma Games Ltd [2006] EWHC 24 (Ch) (20 January 2006).
[34] Ibid.
, paras. 12-18.
[35] Ibid.
, paras. 105-106.
[36] Ibid.
[37] Megan Sword, “To Err Is Both Human and Non-Human” (2019) 88(1) UMKC Law Review 211-233, p. 213.
[38] E.g. the Feilin case in section 2.2.
[39] It should be noted that not all Commonwealth jurisdictions have computer-generated work clauses similar to those in the CDPA 1988. Australia is a notable example; the courts ruled that computer-generated works were not copyrightable because there was no human author and the works thus lacked originality – see IceTV [2009] HCA 14.
[40] Enrico Bonadio, Luke McDonagh, and Christopher Arvidsson, “Intellectual Property Aspects of Robotics” (2018) 9(4) European Journal of Risk Regulation 655-676, p. 669; Jeremy A. Cubert and Richard G.A. Bone, “The Law of Intellectual Property Created by Artificial Intelligence” in Woodrow Barfield and Ugo Pagallo (eds.) Research Handbook on the Law of Artificial Intelligence (Cheltenham: Edward Elgar, 2018), pp. 424-425; Madeleine de Cock Buning, “Autonomous Intelligent Systems as Creative Agents under the EU Framework for Intellectual Property” (2016) 7(2) European Journal of Risk Regulation 310-322, pp. 314-315; Julia Dickenson, Alex Morgan, and Birgit Clark, “Creative Machines: Ownership of Copyright in Content Created by Artificial Intelligence Applications” (2017) 39(8) European Intellectual Property Review 457-460, pp. 457-458; Dornis, supra n. 7, pp. 20-24; Guadamuz, supra n. 7, pp. 182-183; Ihalainen, supra n. 10, pp. 726-727; Lambert, supra n. 10, p. 14; Maggiore, supra n. 7, pp. 387-389; Perry and Marhoni, supra n. 10, pp. 624-625; Ramalho, supra n. 7, pp. 14-16; Turner, supra n. 10, pp. 123-124.
[41] Tencent v. Shanghai Yingxun Technology Co. Ltd , People’s Court of Nanshan (District of Shenzhen) (2019) Yue 0305 Min Chu No. 14010 (深圳市南山区人民法院(2019)粤0305民初14010号民事判决), 24 December 2019.
[42] Ibid.
[43] Ibid.
[44] Ibid.
[45] Ibid.
[46] Ibid.
[47] Ibid.
[48] The Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), Art. 10; WIPO Copyright Treaty, Art. 5.
[49] Tencent v. Shanghai Yingxun Technology Co. Ltd , supra n. 41.
[50] Yavar Bathaee, “The Artificial Intelligence Black Box and the Failure of Intent and Causation” (2018) 31(2) Harvard Journal of Law & Technology 889-938, pp. 901-906; Ashley Deeks, “The Judicial Demand for Explainable Artificial Intelligence” (2019) 119(7) Columbia Law Review 1829-1850, pp. 1832-1838.
[51] Nadia Banteka, “Artificially Intelligent Persons” (2021) 58(3) Houston Law Review 537-596 , pp. 547-548; Jonathan A Schnader, “Mal-Who? Mal-What? Wal-Where? The Future Cyber-Threat of A Non-Fiction Neuromance: Legally Un-attributable, Cyberspace-Bound, Decentralized Autonomous Entities” (2019) 21(2) North Carolina Journal of Law & Technology 1-40, p. 34.
[52] Lee, supra n. 24, p. 192.
[53] Whita and Matulionyte, supra n. 7, p. 238.
[54] Yu, supra n. 27, p. 1259.
[55] Feilin v Baidu , Beijing Internet Court, (2018) Jing 0491 Min Chu No. 239 (北京互联网法院 (2018) 京0491民初239号民事判决), 26 April 2019.
[56] Ibid.
[57] Ibid.
[58] Ibid.
[59] Ibid.
[60] Ibid.
[61] Ibid.
[62] Ibid.
[63] Ibid.
[64] Ibid.
[65] Ibid.
[66] Ibid.
[67] Ibid.
[68] E.g. the Nova case in section 2.1.1.
[69] E.g. Whita and Matulionyte, supra n. 7, p. 239.
[70] Feilin v Baidu , supra n. 55.
[71] 545 U.S. 913, pp. 934-935 (2005).
[72] 464 U.S. 417, p. 439 (1984).
[73] E.g. Joshua L. Simmons, “Inventions Made for Hire” (2012) 2(1) New York University Journal of Intellectual Property and Entertainment Law 1-50, pp. 43-47.
[74] E.g. Christopher Aide, “A More Comprehensive Soul: Romantic Conceptions of Authorship and the Copyright Doctrine of Moral Right” (1990) 48(2) University of Toronto Faculty of Law Review 211-228.
[75] The exception would the rule relating to employee’s works, discussed below. Also, this is different in case of neighboring or related rights recordings, broadcasts or cinematographic films. Under most copyright laws, there is no ‘author’ of these types of subject matter and the initial owners are those who produced the work (ie record company, broadcaster, film maker). Notably, underlying works (such as music, text) would still have authors. One of the exception is the UK, where the ‘author’ is defined broadly and includes not only a creator but also a producer of a music recording or a broadcast, as well as a publisher of an edition, see CDPA 1988 s. 9(2).
[76] E.g. CDPA 1988 (UK), s. 11(2); Copyright Act 1986 (Australia), s. 35(6).
[77] E.g. UK Patent Act 1977, s. 7(3), (‘actual deviser’); for further discussion see Andrew Stewart et al., Intellectual Property in Australia (New York: Lexis Nexis, 2018), p. 469.
[78] Stewart et al., supra n. 77, pp. 473-481.
[79] As indicated in the introduction, we will focus on initial ownership only, and leave the question of authorship outside the scope of this paper.
[80] DABUS , para. 35.
[81] DABUS , para. 49(3)(a).
[82] DABUS , para. 49(2).
[83] DABUS , para. 49(2).
[84] DABUS , n. 34.
[85] See section 3.4.2 below.
[86] See Abbott, supra n. 14, pp. 1114-1117; for other overview of other ownership allocation proposals see Pearlman, supra n. 7, pp. 25-30.
[87] Abbott, supra n. 14, pp. 1114-1117.
[88] Ibid, pp. 1113-1114.
[89] Ibid, pp. 1089-1090.
[90] Ibid, p. 1091.
[91] Ibid, p. 1115.
[92] USPTO, supra n. 8, p. 7.
[93] Abbott, supra n. 14, p. 1116.
[94] Bond and Blair, supra n. 31, p. 423; Lionel Bently and Brad Sherman, Intellectual Property Law (Oxford Univeristy Press, 4th ed., 2014), p. 117; Dornis, supra n. 7, pp. 44-46; Lambert, supra n. 10, pp. 13, 18; Maggiore, supra n. 7, p. 398.
[95] Richard Arnold, “Content Copyrights and Signal Copyrights: The Case for a Rational Scheme of Protection” (2011) 1(3) Queen Mary Journal of Intellectual Property 272-279, p. 277; Lee, supra n. 24, p. 184-186.
[96] See Abbott, supra n. 14, p. 1116.
[97] For the definitions of these see e.g. Stewart et al., supra n. 77, pp. 848-855.
[98] See section 3.4.2 below.
[99] See section 2.1.2 above.
[100] See discussion above.
[101] Ginsburg and Budiardjo, supra n. 12, pp. 533-445.
[102] Certainly, there might be disputes as to the ownership of AI system, especially if it was developed outside employment relationship or if data used to train system was not properly acquired or licensed. However, the ownership of AI falls outside the scope of this paper.
[103] E.g. Ginsburg and Budiardjo, supra n. 12, p. 440.
[104] Jyh-An Lee, “Copyright Divisibility and the Anticommons” (2016) 32(1) American University International Law Review 117-164, pp. 124-130.
[105] See https://magenta.tensorflow.org/ (accessed 20 April 2021).
[106] E.g. Stewart et al., supra n. 77, p. 197.
[107] This might happen e.g. when a few or a group of individuals develop AI system outside an employment relationship; or where a few companies or organizations are collaborating to develop the AI system.
[108] Abbott, supra n. 14, p. 1115.
[109] See section 2.2 above.
[110] DABUS , n. 34.
[111] E.g. Dan L. Burk and Brett H. McDonnell, “The Goldilocks Hypothesis: Balancing Intellectual Property Rights at the Boundary of the Firm” (2007) 2 University of Illinois Law Review 575-636, p. 618.
[112] Richard S. Murphy, “Property Rights in Personal Information: An Economic Defense of Privacy” (1996) 84(7) Georgetown Law Journal 2381-2418, p. 2412.
ChatGPT Is a Blurry JPEG of the Web
By Ted Chiang (The New Yorker, 2023)

In 2013, workers at a German construction company noticed something odd about their Xerox photocopier: when they made a copy of the floor plan of a house, the copy differed from the original in a subtle but significant way. In the original floor plan, each of the house’s three rooms was accompanied by a rectangle specifying its area: the rooms were 14.13, 21.11, and 17.42 square metres, respectively. However, in the photocopy, all three rooms were labelled as being 14.13 square metres in size. The company contacted the computer scientist David Kriesel to investigate this seemingly inconceivable result. They needed a computer scientist because a modern Xerox photocopier doesn’t use the physical xerographic process popularized in the nineteen-sixties. Instead, it scans the document digitally, and then prints the resulting image file. Combine that with the fact that virtually every digital image file is compressed to save space, and a solution to the mystery begins to suggest itself.
Compressing a file requires two steps: first, the encoding, during which the file is converted into a more compact format, and then the decoding, whereby the process is reversed. If the restored file is identical to the original, then the compression process is described as lossless: no information has been discarded. By contrast, if the restored file is only an approximation of the original, the compression is described as lossy: some information has been discarded and is now unrecoverable. Lossless compression is what’s typically used for text files and computer programs, because those are domains in which even a single incorrect character has the potential to be disastrous. Lossy compression is often used for photos, audio, and video in situations in which absolute accuracy isn’t essential. Most of the time, we don’t notice if a picture, song, or movie isn’t perfectly reproduced. The loss in fidelity becomes more perceptible only as files are squeezed very tightly. In those cases, we notice what are known as compression artifacts: the fuzziness of the smallest JPEG and MPEG images, or the tinny sound of low-bit-rate MP3s.
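The distinction is easy to demonstrate. Here is a minimal Python sketch using the standard library's zlib module, a lossless codec: the decoded bytes are guaranteed to match the original exactly, which is precisely the guarantee a lossy format gives up.

import zlib

original = b"The rooms were 14.13, 21.11, and 17.42 square metres."

# Lossless: decoding reverses encoding exactly; nothing is discarded.
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)
assert restored == original

# A lossy codec (JPEG, MP3) makes no such promise: decoding yields only
# an approximation, and the discarded detail is gone for good.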
Xerox photocopiers use a lossy compression format known as JBIG2, designed for use with black-and-white images. To save space, the copier identifies similar-looking regions in the image and stores a single copy for all of them; when the file is decompressed, it uses that copy repeatedly to reconstruct the image. It turned out that the photocopier had judged the labels specifying the area of the rooms to be similar enough that it needed to store only one of them—14.13—and it reused that one for all three rooms when printing the floor plan.
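A deliberately crude sketch of that substitution trick shows how it saves space and how it fails. This is a toy stand-in, not the actual JBIG2 algorithm; it treats short strings as image patches and stores one exemplar for any group judged "similar enough."

def similar(a, b, tolerance):
    # Two "patches" match if they differ in at most `tolerance` positions.
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) <= tolerance

def compress_patches(patches, tolerance):
    exemplars, indices = [], []
    for patch in patches:
        for i, exemplar in enumerate(exemplars):
            if similar(patch, exemplar, tolerance):
                indices.append(i)        # reuse the stored exemplar: space saved
                break
        else:
            indices.append(len(exemplars))
            exemplars.append(patch)      # store a new exemplar
    return exemplars, indices

def decompress_patches(exemplars, indices):
    return [exemplars[i] for i in indices]

labels = ["14.13", "21.11", "17.42"]
print(decompress_patches(*compress_patches(labels, tolerance=0)))
# ['14.13', '21.11', '17.42']  (strict matching is faithful)
print(decompress_patches(*compress_patches(labels, tolerance=3)))
# ['14.13', '14.13', '14.13']  (loose matching is readable but wrong)

With strict matching the copy is faithful; with loose matching the output stays crisply readable while being quietly wrong, which is exactly the failure Kriesel diagnosed.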
The fact that Xerox photocopiers use a lossy compression format instead of a lossless one isn’t, in itself, a problem. The problem is that the photocopiers were degrading the image in a subtle way, in which the compression artifacts weren’t immediately recognizable. If the photocopier simply produced blurry printouts, everyone would know that they weren’t accurate reproductions of the originals. What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t. (In 2014, Xerox released a patch to correct this issue.) I think that this incident with the Xerox photocopier is worth bearing in mind today, as we consider OpenAI’s ChatGPT and other similar programs, which A.I. researchers call large language models. The resemblance between a photocopier and a large language model might not be immediately apparent—but consider the following scenario. Imagine that you’re about to lose your access to the Internet forever. In preparation, you plan to create a compressed copy of all the text on the Web, so that you can store it on a private server. Unfortunately, your private server has only one per cent of the space needed; you can’t use a lossless compression algorithm if you want everything to fit. Instead, you write a lossy algorithm that identifies statistical regularities in the text and stores them in a specialized file format. Because you have virtually unlimited computational power to throw at this task, your algorithm can identify extraordinarily nuanced statistical regularities, and this allows you to achieve the desired compression ratio of a hundred to one.
Now, losing your Internet access isn’t quite so terrible; you’ve got all the information on the Web stored on your server. The only catch is that, because the text has been so highly compressed, you can’t look for information by searching for an exact quote; you’ll never get an exact match, because the words aren’t what’s being stored. To solve this problem, you create an interface that accepts queries in the form of questions and responds with answers that convey the gist of what you have on your server.
What I’ve described sounds a lot like ChatGPT, or most any other large language model. Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
This analogy makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation—that is, estimating what’s missing by looking at what’s on either side of the gap. When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them. (“When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .”) ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
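Reduced to code, the reconstruction step is almost embarrassingly simple. In this one-dimensional sketch, a lost value is estimated as the average of its surviving neighbors; real codecs interpolate in two dimensions and in smarter ways.

def interpolate(left, right):
    # Estimate what's missing from what survives on either side.
    return (left + right) / 2

row = [100, 102, None, 108]           # one sample lost to compression
row[2] = interpolate(row[1], row[3])  # a plausible guess: 105.0
print(row)                            # [100, 102, 105.0, 108]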
Given that large language models like ChatGPT are often extolled as the cutting edge of artificial intelligence, it may sound dismissive—or at least deflating—to describe them as lossy text-compression algorithms. I do think that this perspective offers a useful corrective to the tendency to anthropomorphize large language models, but there is another aspect to the compression analogy that is worth considering. Since 2006, an A.I. researcher named Marcus Hutter has offered a cash reward—known as the Prize for Compressing Human Knowledge, or the Hutter Prize—to anyone who can losslessly compress a specific one-gigabyte snapshot of Wikipedia smaller than the previous prize-winner did. You have probably encountered files compressed using the zip file format. The zip format reduces Hutter’s one-gigabyte file to about three hundred megabytes; the most recent prize-winner has managed to reduce it to a hundred and fifteen megabytes. This isn’t just an exercise in smooshing. Hutter believes that better text compression will be instrumental in the creation of human-level artificial intelligence, in part because the greatest degree of compression can be achieved by understanding the text.
To grasp the proposed relationship between compression and understanding, imagine that you have a text file containing a million examples of addition, subtraction, multiplication, and division. Although any compression algorithm could reduce the size of this file, the way to achieve the greatest compression ratio would probably be to derive the principles of arithmetic and then write the code for a calculator program. Using a calculator, you could perfectly reconstruct not just the million examples in the file but any other example of arithmetic that you might encounter in the future. The same logic applies to the problem of compressing a slice of Wikipedia. If a compression program knows that force equals mass times acceleration, it can discard a lot of words when compressing the pages about physics because it will be able to reconstruct them. Likewise, the more the program knows about supply and demand, the more words it can discard when compressing the pages about economics, and so forth.
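The intuition can be checked in a few lines of Python. A rough sketch (exact byte counts will vary with the compressor settings): a file of ninety thousand worked additions shrinks under zlib, but the rule that generated it, a program shorter than a tweet, reconstructs every example in the file plus any sum it has never seen.

import zlib

# A "file" of ninety thousand worked examples of addition.
data = "\n".join(f"{a} + {b} = {a + b}"
                 for a in range(100, 400)
                 for b in range(100, 400)).encode()

print(f"raw: {len(data):,} bytes; zlib: {len(zlib.compress(data, 9)):,} bytes")

# The ultimate compression is the rule itself: a tiny program that can
# regenerate the whole file, and any sum not in it.
program = ('print("\\n".join(f"{a} + {b} = {a+b}" '
           'for a in range(100, 400) for b in range(100, 400)))')
print(f"generator program: {len(program)} bytes")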
Large language models identify statistical regularities in text. Any analysis of the text of the Web will reveal that phrases like “supply is low” often appear in close proximity to phrases like “prices rise.” A chatbot that incorporates this correlation might, when asked a question about the effect of supply shortages, respond with an answer about prices increasing. If a large language model has compiled a vast number of correlations between economic terms—so many that it can offer plausible responses to a wide variety of questions—should we say that it actually understands economic theory? Models like ChatGPT aren’t eligible for the Hutter Prize for a variety of reasons, one of which is that they don’t reconstruct the original text precisely—i.e., they don’t perform lossless compression. But is it possible that their lossy compression nonetheless indicates real understanding of the sort that A.I. researchers are interested in? Let’s go back to the example of arithmetic. If you ask GPT-3 (the large-language model that ChatGPT was built from) to add or subtract a pair of numbers, it almost always responds with the correct answer when the numbers have only two digits. But its accuracy worsens significantly with larger numbers, falling to ten per cent when the numbers have five digits. Most of the correct answers that GPT-3 gives are not found on the Web—there aren’t many Web pages that contain the text “245 + 821,” for example—so it’s not engaged in simple memorization. But, despite ingesting a vast amount of information, it hasn’t been able to derive the principles of arithmetic, either. A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic. The Web certainly contains explanations of carrying the “1,” but GPT-3 isn’t able to incorporate those explanations. GPT-3’s statistical analysis of examples of arithmetic enables it to produce a superficial approximation of the real thing, but no more than that.
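To see why that failure fits the compression story, consider a caricature of the behavior. This is emphatically not GPT-3's actual mechanism, just an illustration: an adder that processes each digit column but never propagates the carry is exactly right whenever no column overflows, and confidently wrong whenever one does.

def add_without_carry(a, b):
    # Hypothetical failure mode: sum each digit column mod 10, drop all carries.
    width = max(len(str(a)), len(str(b)))
    x, y = str(a).zfill(width), str(b).zfill(width)
    return int("".join(str((int(p) + int(q)) % 10) for p, q in zip(x, y)))

print(add_without_carry(23, 45), "vs", 23 + 45)      # 68 vs 68: no carry needed
print(add_without_carry(245, 821), "vs", 245 + 821)  # 66 vs 1066: the carry is lost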
Given GPT-3’s failure at a subject taught in elementary school, how can we explain the fact that it sometimes appears to perform well at writing college-level essays? Even though large language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory. Perhaps arithmetic is a special case, one for which large language models are poorly suited. Is it possible that, in areas outside addition and subtraction, statistical regularities in text actually do correspond to genuine knowledge of the real world? I think there’s a simpler explanation. Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
A lot of uses have been proposed for large language models. Thinking about them as blurry JPEGs offers a way to evaluate what they might or might not be well suited for. Let’s consider a few scenarios.
Can large language models take the place of traditional search engines? For us to have confidence in them, we would need to know that they haven’t been fed propaganda and conspiracy theories—we’d need to know that the JPEG is capturing the right sections of the Web. But, even if a large language model includes only the information we want, there’s still the matter of blurriness. There’s a type of blurriness that is acceptable, which is the re-stating of information in different words. Then there’s the blurriness of outright fabrication, which we consider unacceptable when we’re looking for facts. It’s not clear that it’s technically possible to retain the acceptable kind of blurriness while eliminating the unacceptable kind, but I expect that we’ll find out in the near future.
Even if it is possible to restrict large language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.
There is very little information available about OpenAI’s forthcoming successor to ChatGPT, GPT-4. But I’m going to make a prediction: when assembling the vast amount of text used to train GPT-4, the people at OpenAI will have made every effort to exclude material generated by ChatGPT or any other large language model. If this turns out to be the case, it will serve as unintentional confirmation that the analogy between large language models and lossy compression is useful. Repeatedly resaving a JPEG creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.
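The degradation is easy to measure. A rough sketch, assuming the Pillow imaging library is installed: re-encode the same picture two hundred times and track its drift from the first generation (the exact numbers depend on the image and the quality setting).

import io
from PIL import Image, ImageChops, ImageStat

original = Image.effect_noise((128, 128), 40)  # a noisy grayscale test image
image = original

for generation in range(1, 201):
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=50)  # re-encode, losing a little
    buffer.seek(0)
    image = Image.open(buffer).copy()              # decode the new generation
    if generation % 50 == 0:
        drift = ImageStat.Stat(ImageChops.difference(original, image)).mean[0]
        print(f"generation {generation}: mean pixel drift {drift:.1f}")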
Indeed, a useful criterion for gauging a large language model’s quality might be the willingness of a company to use the text that it generates as training material for a new model. If the output of ChatGPT isn’t good enough for GPT-4, we might take that as an indicator that it’s not good enough for us, either. Conversely, if a model starts generating text so good that it can be used to train new models, then that should give us confidence in the quality of that text. (I suspect that such an outcome would require a major breakthrough in the techniques used to build these models.) If and when we start seeing models producing output that’s as good as their input, then the analogy of lossy compression will no longer be applicable.
Can large language models help humans with the creation of original writing? To answer that, we need to be specific about what we mean by that question. There is a genre of art known as Xerox art, or photocopy art, in which artists use the distinctive properties of photocopiers as creative tools. Something along those lines is surely possible with the photocopier that is ChatGPT, so, in that sense, the answer is yes. But I don’t think that anyone would claim that photocopiers have become an essential tool in the creation of art; the vast majority of artists don’t use them in their creative process, and no one argues that they’re putting themselves at a disadvantage with that choice.
So let’s assume that we’re not talking about a new genre of writing that’s analogous to Xerox art. Given that stipulation, can the text generated by large language models be a useful starting point for writers to build off when writing something original, whether it’s fiction or nonfiction? Will letting a large language model handle the boilerplate allow writers to focus their attention on the really creative parts? Obviously, no one can speak for all writers, but let me make the argument that starting with a blurry copy of unoriginal work isn’t a good way to create original work. If you’re a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn’t wasted; on the contrary, I would suggest that it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose. Having students write essays isn’t merely a way to test their grasp of the material; it gives them experience in articulating their thoughts. If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
And it’s not the case that, once you have ceased to be a student, you can safely use the template that a large language model provides. The struggle to express your thoughts doesn’t disappear once you graduate—it can take place every time you start drafting a new piece. Sometimes it’s only in the process of writing that you discover your original ideas. Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance. Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.
There’s nothing magical or mystical about writing, but it involves more than placing an existing document on an unreliable photocopier and pressing the Print button. It’s possible that, in the future, we will build an A.I. that is capable of writing good prose based on nothing but its own experience of the world. The day we achieve that will be momentous indeed—but that day lies far beyond our prediction horizon. In the meantime, it’s reasonable to ask, What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren’t losing our access to the Internet. So just how much use is a blurry JPEG, when you still have the original?
The Rise and Fall of Getting Things Done
By Cal Newport (The New Yorker, 2020)

As the obligations of knowledge work have grown increasingly frenetic, workers have flocked to productivity tools and techniques.
In the early two-thousands, Merlin Mann, a Web designer and avowed Macintosh enthusiast, was working as a freelance project manager for software companies. He had held similar roles for years, so he knew the ins and outs of the job; he was surprised, therefore, to find that he was overwhelmed—not by the intellectual aspects of his work but by the many small administrative tasks, such as scheduling conference calls, that bubbled up from a turbulent stream of e-mail messages. “I was in this batting cage, deluged with information,” he told me recently. “I went to college. I was smart. Why was I having such a hard time?” Mann wasn’t alone in his frustration. In the nineteen-nineties, the spread of e-mail had transformed knowledge work. With nearly all friction removed from professional communication, anyone could bother anyone else at any time. Many e-mails brought obligations: to answer a question, look into a lead, arrange a meeting, or provide feedback. Work lives that had once been sequential—two or three blocks of work, broken up by meetings and phone calls—became frantic, improvisational, and impossibly overloaded. “E-mail is a ball of uncertainty that represents anxiety,” Mann said, reflecting on this period.
In 2003, he came across a book that seemed to address his frustrations. It was titled “Getting Things Done: The Art of Stress-Free Productivity,” and, for Mann, it changed everything. The time-management system it described, called G.T.D., had been developed by David Allen, a consultant turned entrepreneur who lived in the crunchy mountain town of Ojai, California. Allen combined ideas from Zen Buddhism with the strict organizational techniques he’d honed while advising corporate clients. He proposed a theory about how our minds work: when we try to keep track of obligations in our heads, we create “open loops” that make us anxious. That anxiety, in turn, reduces our ability to think effectively. If we could avoid worrying about what we were supposed to be doing, we could focus more fully on what we were actually doing, achieving what Allen called a “mind like water.” To maintain such a mind, one must deal with new obligations before they can become entrenched as open loops. G.T.D.’s solution is a multi-step system. It begins with what Allen describes as full capture: the idea is to maintain a set of in-boxes into which you can drop obligations as soon as they arise. One such in-box might be a physical tray on your desk; when you suddenly remember that you need to finish a task before an upcoming meeting, you can jot a reminder on a piece of paper, toss it in the tray, and, without breaking concentration, return to whatever it was you were doing. Throughout the day, you might add similar thoughts to other in-boxes, such as a list on your computer or a pocket notebook. But jotting down notes isn’t, in itself, enough to close the loops; your mind must trust that you will return to your in-boxes and process what’s inside them. Allen calls this final, crucial step regular review. During reviews, you transform your haphazard reminders into concrete “next actions,” then enter them onto a master list.
This list can now provide a motive force for your efforts. In his book, Allen recommends organizing the master list into contexts, such as @phone or @computer. Moving through the day, you can simply look at the tasks listed under your current context and execute them one after another. Allen uses the analogy of cranking widgets to describe this calmly mechanical approach to work. It’s a rigorous system for the generation of serenity.
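The system is mechanical enough to write down as code. Here is a minimal, illustrative sketch of the loop: capture into an in-box, process notes into next actions, then crank through one context at a time. The class and method names are ours, not Allen's.

class GTD:
    def __init__(self):
        self.inbox = []         # full capture: open loops land here first
        self.next_actions = []  # the master list of (context, action) pairs

    def capture(self, thought):
        # Jot the obligation down, then return to what you were doing.
        self.inbox.append(thought)

    def process(self, clarify):
        # Regular review: turn every raw note into a concrete next action.
        while self.inbox:
            self.next_actions.append(clarify(self.inbox.pop()))

    def crank(self, context):
        # Work the list mechanically, one context at a time.
        return [action for c, action in self.next_actions if c == context]

gtd = GTD()
gtd.capture("slides for Tuesday's meeting")
gtd.process(lambda note: ("@computer", "draft " + note))
print(gtd.crank("@computer"))  # ['draft slides for Tuesday's meeting']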
To someone with Mann’s engineering sensibility, the precision of G.T.D. was appealing, and the method itself seemed ripe for optimization. In September, 2004, Mann started a blog called 43 Folders—a reference to an organizational hack, the “tickler file,” described in Allen’s book. In an introductory post, Mann wrote, “Believe me, if you keep finding that the water of your life has somehow run onto the floor, GTD may be just the drinking glass you need to get things back together.” He published nine posts about G.T.D. during the blog’s first month. The discussion was often highly technical: in one post, he proposed the creation of a unified XML format for G.T.D. data, which would allow different apps to display the same tasks in multiple formats, including “graphical map, outline, RDF, structured text.” He told me that the writer Cory Doctorow linked to an early 43 Folders post on Doctorow’s popular nerd-culture site, Boing Boing. Traffic surged. Mann soon announced that, in just thirty days, 43 Folders had received over a hundred and fifty thousand unique visitors. (“That’s just nuts,” he wrote.) The site became so popular that Mann quit his job to work on it full time. As his influence grew, he popularized a new term for the genre that he was helping to create: “productivity pr0n,” an adaptation of the “leet speak,” or geek lingo, word for pornography. The hunger for this pr0n, he noticed, was insatiable. People were desperate to tinker with their productivity systems.
What Mann and his fellow-enthusiasts were doing felt perfectly natural: they were trying to be more productive in a knowledge-work environment that seemed increasingly frenetic and harder to control. What they didn’t realize was that they were reacting to a profound shift in the workplace that had gone largely unnoticed.
Before there was “personal productivity,” there was just productivity: a measure of how much a worker could produce in a fixed interval of time. At the turn of the twentieth century, Frederick Taylor and his acolytes had studied the physical movements of factory workers, looking for places to save time and reduce costs. It wasn’t immediately obvious how this industrial concept of productivity might be adapted from the assembly line to the office. A major figure in this translation was Peter Drucker, the influential business scholar who is widely regarded as the creator of modern management theory.
Drucker was born in Austria in 1909. His parents, Adolph and Caroline, held evening salons that were attended by Friedrich Hayek and Joseph Schumpeter, among other economic luminaries. The intellectual energy of these salons seemed to inspire Drucker’s own productivity: he wrote thirty-nine books, the last shortly before his death, at the age of ninety-five. His career took off after the publication of his second book, “The Future of Industrial Man,” in 1942, when he was a thirty-three-year-old professor at Bennington College. The book asked how an “industrial society”—one unfolding within “the entirely new physical reality which Western man has created as his habitat since James Watt invented the steam engine”—might best be structured to respect human freedom and dignity. Arriving in the midst of an industrial world war, the book found a wide audience. After reading it, the management team at General Motors invited Drucker to spend two years studying the operations of what was then the world’s largest corporation. The 1946 book that resulted from that engagement, “Concept of the Corporation,” was one of the first to look seriously at how big organizations actually got work done. It laid the foundation for treating management as a subject that could be studied analytically.
In the nineteen-fifties, the American economy began to move from manual labor toward cognitive work. Drucker helped business leaders understand this transformation. In his 1959 book, “Landmarks of Tomorrow,” he coined the term “knowledge work,” and argued that autonomy would be the central feature of the new corporate world. Drucker predicted that corporate profits would depend on mental effort, and that each individual knowledge worker, possessing skills too specialized to be broken down into “repetitive, simple, mechanical motions” choreographed from above, would need to decide how to “apply his knowledge as a professional” and monitor his own productivity. “The knowledge worker cannot be supervised closely or in detail,” Drucker wrote, in “The Effective Executive,” from 1967. “He must direct himself.” Drucker’s emphasis on the autonomy of knowledge workers made sense, as there was no obvious way to deconstruct the efforts required by newly important mid-century jobs—like corporate research and development or advertisement copywriting—into assembly-line-style sequences of optimized steps. But Drucker was also influenced by the politics of the Cold War.
He viewed creativity and innovation as key to staying ahead of the Soviets. Citing the invention of the atomic bomb, he argued that scientific work of such complexity and ambiguity could not have been managed using the heavy-handed techniques of the industrial age, which he likened to the centralized planning of the Soviet economy. Future industries, he suggested, would need to operate in “local” and “decentralized” ways.
To support his emphasis on knowledge-worker autonomy, Drucker introduced the idea of management by objectives, a process in which managers focus on setting out clear targets, but the details of how they’re accomplished are left to individuals. This idea is both extremely consequential and rarely debated. It’s why the modern office worker is inundated with quantified quarterly goals and motivating mission statements, but receives almost no guidance on how to actually organize and manage these efforts. It was thus largely owing to Drucker that, in 2004, when Merlin Mann found himself overwhelmed by his work, he took it for granted that the solution to his woes would be found in the optimization of his personal habits.
As the popularity of 43 Folders grew, so did Mann’s influence in the online productivity world. One breakthrough from this period was a novel organizational device that he called “the hipster PDA.” Pre-smartphone handheld devices, such as the Palm Pilot, were often described as “personal digital assistants”; the hipster P.D.A. was proudly analog. The instructions for making one were aggressively simple: “1. Get a bunch of 3x5 inch index cards. 2. Clip them together with a binder clip. 3. There is no step 3.” The “device,” Mann suggested, was ideal for implementing G.T.D.: the top index card could serve as an in-box, where tasks could be jotted down for subsequent processing, while colored cards in the stack could act as dividers to organize tasks by project or context. A 2005 article in the Globe and Mail noted that Ian Capstick, a press secretary for Canada’s New Democratic Party, wielded a hipster P.D.A. in place of a BlackBerry.
Just as G.T.D. was achieving widespread popularity, however, Mann’s zeal for his own practice began to fade. An inflection point in his writing came in 2007, soon after he gave a G.T.D.-inspired speech about e-mail management to an overflow audience at Google’s Mountain View headquarters. Building on the classic productivity idea that an office worker shouldn’t touch the same piece of paper more than once, Mann outlined a new method for rapidly processing e-mails. In this system, you would read each e-mail only once, then select from a limited set of options—delete it, respond to it, defer it (by moving it into a folder of messages requiring long responses), delegate it, or “do” it (by extracting and executing the activity at its core, or capturing it for later attention in a system like G.T.D.). The goal was to apply these rules mechanically until your digital message pile was empty. Mann called his strategy Inbox Zero. After Google uploaded a video of his talk to YouTube, the term entered the vernacular. Editors began inquiring about book deals.
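Rendered as a loop, the method's appeal is plain: each message is touched exactly once, dispatched with one of the five verbs, and the pile provably ends empty. What follows is a paraphrase in code, with invented message and verb names, not Mann's own formulation.

def inbox_zero(messages, classify):
    # Touch each message once; classify() must return one of the five verbs.
    actions = {"respond": [], "defer": [], "delegate": [], "do": []}
    for message in messages:
        verb = classify(message)
        if verb != "delete":       # deleted messages simply vanish
            actions[verb].append(message)
    return actions                 # the in-box itself is now empty

pile = ["newsletter", "quick question", "long proposal", "task for ops"]
verbs = {"newsletter": "delete", "quick question": "respond",
         "long proposal": "defer", "task for ops": "delegate"}
print(inbox_zero(pile, verbs.get))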
Not long afterward, Mann posted a self-reflective essay on 43 Folders, in which he revealed a growing dissatisfaction with the world of personal productivity. Productivity pr0n, he suggested, was becoming a bewildering, complexifying end in itself—list-making as a “cargo cult,” system-tweaking as an addiction. “On more than a few days, I wondered what, precisely, I was trying to accomplish,” he wrote. Part of the problem was the recursive quality of his work. Refining his productivity system so that he could blog more efficiently about productivity made him feel as if he were being “tossed around by a menacing Rube Goldberg device” of his own design; at times, he said, “I thought I might be losing my mind.” He also wondered whether, on a substantive level, the approach that he’d been following was really capable of addressing his frustrations. It seemed to him that it was possible to implement many G.T.D.-inflected life hacks without feeling “more competent, stable, and alive.” He cleaned house, deleting posts. A new “About” page explained that 43 Folders was no longer a productivity blog but a “website about finding the time and attention to do your best creative work.” Mann’s posting slowed. In 2011, after a couple years of desultory writing, he published a valedictory essay titled “Cranking”—a rumination on an illness of his father’s, and a description of his own struggle to write a book about Inbox Zero after becoming disenchanted with personal productivity as a concept. “I’d type and type. I’d crank and I’d crank,” he recounted. “I’m done cranking. And, I’m ready to make a change.” After noting that his editor would likely cancel his book contract, he concluded with a bittersweet sign-off: “Thanks for listening, nerds.” There have been no posts on the site for the past nine years.
Even after the loss of one of its leaders, the productivity pr0n movement continued to thrive because the overload culture that had inspired it continued to worsen. G.T.D. was joined by numerous other attempts to tame excessive work obligations, from the bullet-journal method, to the explosion in smartphone-based productivity apps, to my own contribution to the movement, a call to emphasize “deep” work over “shallow.” But none of these responses solved the underlying problem.
The knowledge sector’s insistence that productivity is a personal issue seems to have created a so-called “tragedy of the commons” scenario, in which individuals making reasonable decisions for themselves insure a negative group outcome. An office worker’s life is dramatically easier, in the moment, if she can send messages that demand immediate responses from her colleagues, or disseminate requests and tasks to others in an ad-hoc manner. But the cumulative effect of such constant, unstructured communication is cognitively harmful: on the receiving end, the deluge of information and demands makes work unmanageable. There’s little that any one individual can do to fix the problem. A worker might send fewer e-mail requests to others, and become more structured about her work, but she’ll still receive requests from everyone else; meanwhile, if she decides to decrease the amount of time that she spends engaging with this harried digital din, she slows down other people’s work, creating frustration.
In this context, the shortcomings of personal-productivity systems like G.T.D. become clear. They don’t directly address the fundamental problem: the insidiously haphazard way that work unfolds at the organizational level. They only help individuals cope with its effects. A highly optimized implementation of G.T.D. might have helped Mann organize the hundreds of tasks that arrived haphazardly in his in-box daily, but it could do nothing to reduce the quantity of these requests.
There are ways to fix the destructive effects of overload culture, but such solutions would have to begin with a reëvaluation of Peter Drucker’s insistence on knowledge-worker autonomy. Productivity, we must recognize, can never be entirely personal. It must be connected to a system that we can study, analyze, and improve.
One of the few academics who has seriously explored knowledge-work productivity in recent years is Tom Davenport, a professor of information technology and management at Babson College. Many organizations claim to be interested in productivity, he told me, but they almost always pursue it by introducing new technology tools—spreadsheets, network applications, Web-based collaboration software—in piecemeal fashion. The general belief is that knowledge workers will never stand for intrusions into the autonomy they’ve come to expect. The idea of large-scale interventions that might replace the mess of unstructured messaging with a more structured set of procedures is rarely considered.
Although Davenport’s 2005 book, “Thinking for a Living,” attempted to offer concrete advice about how knowledge-worker productivity might be improved, in many places his advice is constrained by the assumed inviolability of autonomy. In one chapter, for example, he explores the possibility of routinizing or constraining the tasks of “transaction” workers, who perform similar duties over and over, by using a diagram to communicate an optimal sequence of actions. He adds, however, that such routinization simply won’t appeal to “expert” workers, who he says are unlikely to pay attention to elaborate flowcharts suggesting when they should collaborate and when they should leave each other alone. In the end, “Thinking for a Living” failed to find an audience. “It was one of my worst-selling books,” Davenport said. He soon shifted his attention to more popular topics, such as big data and artificial intelligence.
And yet, even if we accept that people don’t want to be micromanaged, it doesn’t follow that every single aspect of knowledge work must be left to the individual. If I’m a computer programmer, I might not want my project manager telling me how to solve a coding problem, but I would welcome clear-cut rules that limit the ability of other divisions to rope me into endless meetings or demand responses to never-ending urgent messages.
The benefits of top-down interventions designed to protect both attention and autonomy could be significant. In an article published in 1999, Drucker noted that, in the course of the twentieth century, the productivity of the average manual laborer had increased by a factor of fifty—the result, in large part, of an obsessive focus on how to conduct this work more effectively. By some estimates, knowledge workers in North America outnumber manual workers by close to four to one—and yet, as Drucker wrote, “Work on the productivity of the knowledge worker has barely begun.” Fittingly, we can derive a clear vision of a more productive future by returning to Merlin Mann. In the final years of 43 Folders, Mann began dabbling in podcasting. After shuttering his Web site, he turned his attention more fully toward this emerging medium. Mann now hosts four regular podcasts. One show, “Roderick on the Line,” consists of “unfiltered” conversations with Mann’s friend John Roderick, the lead singer of the band the Long Winters. Another show, “Back to Work,” tackles productivity, mixing some early 43 Folders-style exploration of digital tools with late 43 Folders-style digressions on the purpose of productivity. A recent episode of “Back to Work” combined a technical conversation about TaskPaper—a plain-text to-do-list software for Macs—with a metaphysical discussion about disruptions.
Mann no longer uses the full G.T.D. system. He remains a fan of David Allen (“there’s a person for whom G.T.D. is a perfect fit,” he told me), but the nature of his current work doesn’t generate the overwhelming load of obligations that first drove him to the system, back in 2004. “My needs are very modest from a task-management perspective,” he said. “I have a production schedule for the podcasts; it’s that and grocery lists.” He does still use some big ideas from G.T.D., such as deploying calendar notifications to remind him to water his plants and clean his cat’s litter box. (“Why would I let that take up any part of my brain?”) However, his day is now structured in such a way that he can spend most of his time focussed on the autonomous, creative, skilled work that Drucker identified as so crucial to growing our economy.
Most of us are not our own bosses, and therefore lack the ability to drastically overhaul the structure of our work obligations, but in Mann’s current setup there’s a glimpse of what might help. Imagine if, through some combination of new management thinking and technology, we could introduce processes that minimize the time required to talk about work or fight off random tasks flung our way by equally harried co-workers, and instead let us organize our days around a small number of discrete objectives. A way, that is, to preserve Drucker’s essential autonomy while sidestepping the uncontrollable overload that this autonomy can accidentally trigger. This vision is appealing, but it cannot be realized by individual actions alone. It will require management intervention.
Up until now, there has been little will to instigate this shift in responsibility for productivity from the person to the organization. As Davenport discovered, most knowledge-work companies have been more focussed on keeping up with technological breakthroughs that might open up new markets. To get more done, it’s been sufficient to simply exhort employees to work harder. Laptops and smartphones helped these efforts by enabling office workers to find extra hours in the day to get things done, providing a productivity counterbalance to the inefficiencies of overload culture. And then COVID-19 arrived.
In a remarkably short span, the spread of the coronavirus shut down offices around the world. This unexpected change amplified the inefficiencies latent in our haphazard approach to work. Many individuals responded by immersing themselves in a 43 Folders-style world of productivity hacks. As we attempt to juggle percolating crises, endless Zoom calls, and, for many, the requirement to somehow integrate both child care and homeschooling into the same hours, there’s a sudden, urgent need to carefully organize tasks and intricately synchronize schedules.
But it’s becoming clear that, as Mann learned, individual efforts are not enough. Although offices are now partially reopening, a significant amount of work will, for the foreseeable future, continue to be performed remotely. To survive the current crisis, knowledge-work companies may finally be forced to move past Drucker’s insistent autonomy and begin asking hard questions about how their work is actually accomplished.
It seems likely that any successful effort to reform professional life must start by making it easier to figure out who is working on what, and how it’s going. Because so much of our effort in the office now unfolds in rapid exchanges of digital messages, it’s convenient to allow our in-boxes to become an informal repository for everything we need to get done. This strategy, however, obscures many of the worst aspects of overload culture. When I don’t know how much is currently on your plate, it’s easy for me to add one more thing. When I cannot see what my team is up to, I can allow accidental inequities to arise, in which the willing end up overloaded and the unwilling remain happily unbothered. (For instance, in field tests led by Linda Babcock, of Carnegie Mellon University, women were found to take on a disproportionate load of “non-promotable” service tasks, such as organizing office parties, and to be more likely than men to say yes when asked to do so, leading to their being asked more often.) Consider instead a system that externalizes work. Following the lead of software developers, we might use virtual task boards, where every task is represented by a card that specifies who is doing the work, and is pinned under a column indicating its status. With a quick glance, you can now ascertain everything going on within your team and ask meaningful questions about how much work any one person should tackle at a time. With this setup, optimization becomes possible.
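To make the mechanics concrete, here is a minimal sketch of such a board in Python. It is an illustration only, assuming invented card fields, statuses, and names rather than the design of any particular tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Card:
    title: str
    owner: str
    status: str = "to_do"  # assumed statuses: "to_do", "in_progress", "done"

@dataclass
class Board:
    cards: List[Card] = field(default_factory=list)

    def add(self, title: str, owner: str) -> None:
        self.cards.append(Card(title, owner))

    def workload(self, owner: str) -> int:
        # Count a person's unfinished cards before handing them more work.
        return sum(1 for c in self.cards
                   if c.owner == owner and c.status != "done")

board = Board()
board.add("Draft quarterly report", "alice")
board.add("Answer client questions", "alice")
print(board.workload("alice"))  # 2: visible at a glance, before a third task lands
```

The point is not the code but the visibility it stands for: once a workload can be queried, the question of how much any one person should tackle becomes answerable.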
In software development, for example, it’s widely accepted that programmers are most effective when they work on one feature at a time, focussing in a distraction-free sprint until done. It’s conceivable that other knowledge fields might enjoy similar productivity boosts from more intentional assignments of effort. What if you began each morning with a status meeting in which your team confronts its task board? A plan could then be made about which handful of things each person would tackle that day. Instead of individuals feeling besieged and resentful—about the additional tasks that similarly overwhelmed colleagues are flinging their way—they could execute a collaborative plan designed to benefit everyone.
The ability to better visualize work would also enable smarter processes. If you notice that the influx of administrative demands from other parts of your company is overwhelming you and your co-workers, you’re now motivated to seek fixes. Such optimizations are unlikely to occur when the scope of the problem is hidden among in-box detritus, and when productivity is still understood as a matter of personal will.
Whether or not coronavirus-driven disruption provides the final push we need to move away from our flawed commitment to personal productivity, we can be certain that this transition will eventually happen. Even if we convince ourselves that the psychological toll of overload culture is acceptable collateral damage for a fast-paced modern world, there’s too much latent economic value at stake to keep ignoring the haphazard nature of how we currently work. It’s ironic that Drucker, the very person who extolled the potential of knowledge-worker productivity, helped plant the ideas that have since held it back. To move forward, we must step away from Drucker’s commitment to total autonomy—allowing for freedom in how we execute tasks without also allowing for chaos in how these tasks are assigned. We must, in other words, acknowledge the futility of trying to tame our frenzied work lives all on our own, and instead ask, collectively, whether there’s a better way to get things done.
Can We Stop Runaway A.I.? | The New Yorker (2023)
https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity
Annals of Artificial Intelligence. By Matthew Hutson. Illustration by Shira Inbar.

Increasingly, we’re surrounded by fake people. Sometimes we know it and sometimes we don’t. They offer us customer service on Web sites, target us in video games, and fill our social-media feeds; they trade stocks and, with the help of systems such as OpenAI’s ChatGPT, can write essays, articles, and e-mails. By no means are these A.I. systems up to all the tasks expected of a full-fledged person. But they excel in certain domains, and they’re branching out.
Many researchers involved in A.I. believe that today’s fake people are just the beginning. In their view, there’s a good chance that current A.I. technology will develop into artificial general intelligence, or A.G.I.—a higher form of A.I. capable of thinking at a human level in many or most regards. A smaller group argues that A.G.I.’s power could escalate exponentially. If a computer system can write code—as ChatGPT already can—then it might eventually learn to improve itself over and over again until computing technology reaches what’s known as “the singularity”: a point at which it escapes our control. In the worst-case scenario envisioned by these thinkers, uncontrollable A.I.s could infiltrate every aspect of our technological lives, disrupting or redirecting our infrastructure, financial systems, communications, and more. Fake people, now endowed with superhuman cunning, might persuade us to vote for measures and invest in concerns that fortify their standing, and susceptible individuals or factions could overthrow governments or terrorize populations.
The singularity is by no means a foregone conclusion. It could be that A.G.I. is out of reach, or that computers won’t be able to make themselves smarter.
But transitions between A.I., A.G.I., and superintelligence could happen without our detecting them; our A.I. systems have often surprised us. And recent advances in A.I. have made the most concerning scenarios more plausible. Large companies are already developing generalist algorithms: last May, DeepMind, which is owned by Google’s parent company, Alphabet, unveiled Gato, a “generalist agent” that uses the same type of algorithm as ChatGPT to perform a variety of tasks, from texting and playing video games to controlling a robot arm. “Five years ago, it was risky in my career to say out loud that I believe in the possibility of human-level or superhuman-level A.I.,” Jeff Clune, a computer scientist at the University of British Columbia and the Vector Institute, told me. (Clune has worked at Uber, OpenAI, and DeepMind; his recent work suggests that algorithms that explore the world in an open-ended way might lead to A.G.I.) Now, he said, as A.I. challenges “dissolve,” more researchers are coming out of the “A.I.-safety closet,” declaring openly that A.G.I. is possible and may pose a destabilizing danger to society. In March, a group of prominent technologists published a letter calling for a pause in some types of A.I. research, to prevent the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us”; the next month, Geoffrey Hinton, one of A.I.’s foremost pioneers, left Google so that he could more freely talk about the technology’s dangers, including its threat to humanity.
A growing area of research called A.I. alignment seeks to lessen the danger by insuring that computer systems are “aligned” with human goals. The idea is to avoid unintended consequences while instilling moral values, or their machine equivalents, into A.I.s. Alignment research has shown that even relatively simple A.I. systems can break bad in bizarre ways. In a 2020 paper titled “The Surprising Creativity of Digital Evolution,” Clune and his co-authors collected dozens of real-life anecdotes about unintended and unforeseen A.I. behavior. One researcher aimed to design virtual creatures that moved horizontally, presumably by crawling or slithering; instead, the creatures grew tall and fell over, covering ground through collapse. An A.I. playing a version of tic-tac-toe learned to “win” by deliberately requesting bizarre moves, crashing its opponent’s program and forcing it to forfeit. Other examples of surprising misalignment abound. An A.I. tasked with playing a boat-racing game discovered that it could earn more points by motoring in tight circles and picking up bonuses instead of completing the course; researchers watched the A.I. boat “catching on fire, crashing into other boats, and going the wrong way” while pumping up its score. As our A.I. systems grow more sophisticated and powerful, these sorts of perverse outcomes could become more consequential. We wouldn’t want the A.I.s of the future, which might compute prison sentences, drive cars, or design drugs, to do the equivalent of failing in order to succeed.
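The boat-racing failure is easy to reproduce in miniature. The sketch below is a toy model, not the actual game; every number in it is an assumption, chosen only to show how a proxy reward for picking up bonuses can make a degenerate policy outscore the intended one.

```python
# Toy model of proxy-reward hacking; all values are illustrative assumptions.
BONUS_POINTS = 10    # proxy reward: points per bonus target
FINISH_POINTS = 50   # points for actually completing the course
STEPS = 100          # length of one episode

def finish_course() -> int:
    # Intended behavior: race to the finish, collecting a few bonuses en route.
    return FINISH_POINTS + 3 * BONUS_POINTS

def circle_bonuses() -> int:
    # Misaligned behavior: loop past a respawning bonus every four steps
    # and never finish the course at all.
    return (STEPS // 4) * BONUS_POINTS

print(finish_course())   # 80
print(circle_bonuses())  # 250: the proxy reward prefers the degenerate policy
```

A score function that rewards the wrong thing gets optimized all the same; the circling boat is simply the policy that maximizes it.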
Alignment researchers worry about the King Midas problem: communicate a wish to an A.I. and you may get exactly what you ask for, which isn’t actually what you wanted. (In one famous thought experiment, someone asks an A.I. to maximize the production of paper clips, and the computer system takes over the world in a single-minded pursuit of that goal.) In what we might call the dog-treat problem, an A.I. that cares only about extrinsic rewards fails to pursue good outcomes for their own sake. (Holden Karnofsky, a co-C.E.O. of Open Philanthropy, a foundation whose concerns include A.I. alignment, asked me to imagine an algorithm that improves its performance on the basis of human feedback: it could learn to manipulate my perceptions instead of doing a good job.) Human beings have evolved to pass on their genes, and yet people have sex “in ways that don’t cause more children to be born,” Spencer Greenberg, a mathematician and an entrepreneur, told me; similarly, a “superintelligent” A.I. that’s been designed to serve us could use its powers to pursue novel goals. Stuart Armstrong, a co-founder of the benefit corporation Aligned A.I., suggested that a superintelligent computer system that amasses economic, political, and military power could “hold the world hostage.” Clune outlined a more drawn-from-the-headlines scenario: “What would Vladimir Putin do right now if he was the only one with A.G.I.?” he asked.
Few scientists want to halt the advancement of artificial intelligence. The technology promises to transform too many fields, including science, medicine, and education. But, at the same time, many A.I. researchers are issuing dire warnings about its rise. “It’s almost like you’re deliberately inviting aliens from outer space to land on your planet, having no idea what they’re going to do when they get here, except that they’re going to take over the world,” Stuart Russell, a computer scientist at the University of California, Berkeley, and the author of “Human Compatible,” told me. Disturbingly, some researchers frame the A.I. revolution as both unavoidable and capable of wrecking the world. Warnings are proliferating, but A.I.’s march continues. How much can be done to avert the most extreme scenarios? If the singularity is possible, can we prevent it?

Governments around the world have proposed or enacted regulations on the deployment of A.I. These rules address autonomous cars, hiring algorithms, facial recognition, recommendation engines, and other applications of the technology. But, for the most part, regulations haven’t targeted the research and development of A.I. Even if they did, it’s not clear that we’d know when to tap the brakes. We may not know when we’re nearing a cliff until it’s too late.
It’s difficult to measure a computer’s intelligence. Computer scientists have developed a number of tests for benchmarking an A.I.’s capabilities, but disagree about how to interpret them. Chess was once thought to require general intelligence, until brute-force search algorithms conquered the game; today, we know that a chess program can beat the best grand masters while lacking even rudimentary common sense.
Conversely, an A.I. that seems limited may harbor potential we don’t expect: people are still uncovering emergent capabilities within GPT-4, the engine that powers ChatGPT. Karnofsky, of Open Philanthropy, suggested that, rather than choosing a single task as a benchmark, we might gauge an A.I.’s intellect by looking at the speed with which it learns. A human being “can often learn something from just seeing two or three examples,” he said, but “a lot of A.I. systems need to see a lot of examples to learn something.” Recently, an A.I. program called Cicero mastered the socially and strategically complex board game Diplomacy. We know that it hasn’t achieved A.G.I., however, because it needed to learn partly by studying a data set of more than a hundred thousand human games and playing roughly half a million games against itself.
At the same time, A.I. is advancing quickly, and it could soon begin improving more autonomously. Machine-learning researchers are already working on what they call meta-learning, in which A.I.s learn how to learn. Through a technology called neural-architecture search, algorithms are optimizing the structure of algorithms. Electrical engineers are using specialized A.I. chips to design the next generation of specialized A.I. chips.
Last year, DeepMind unveiled AlphaCode, a system that learned to win coding competitions, and AlphaTensor, which learned to find faster algorithms crucial to machine learning. Clune and others have also explored algorithms for making A.I. systems evolve through mutation, selection, and reproduction.
In other fields, organizations have come up with general methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for instance, watches the development of tools such as DNA synthesis, which could be used to create dangerous pathogens. Anna Laura Ross, who heads the emerging-technologies unit at the W.H.O., told me that her team relies on a variety of foresight methods, among them “Delphi-type” surveys, in which a question is posed to a global network of experts, whose responses are scored and debated and then scored again. “Foresight isn’t about predicting the future” in a granular way, Ross said. Instead of trying to guess which individual institutes or labs might make strides, her team devotes its attention to preparing for likely scenarios.
And yet tracking and forecasting progress toward A.G.I. or superintelligence is complicated by the fact that key steps may occur in the dark. Developers could intentionally hide their systems’ progress from competitors; it’s also possible for even a fairly ordinary A.I. to “lie” about its behavior.
In 2020, researchers demonstrated a way for discriminatory algorithms to evade audits meant to detect their biases; they gave the algorithms the ability to detect when they were being tested and provide nondiscriminatory responses. An “evolving” or self-programming A.I. might invent a similar method and hide its weak points or its capabilities from auditors or even its creators, evading detection.
Forecasting, meanwhile, gets you only so far when a technology moves fast. Suppose that an A.I. system begins upgrading itself by making fundamental breakthroughs in computer science. How quickly could its intelligence accelerate? Researchers debate what they call “takeoff speed.” In what they describe as a “slow” or “soft” takeoff, machines could take years to go from less than humanly intelligent to much smarter than us; in what they call a “fast” or “hard” takeoff, the jump could happen in months—even minutes. Researchers refer to the second scenario as “FOOM,” evoking a comic-book superhero taking flight. Those on the FOOM side point to, among other things, human evolution to justify their case. “It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence,” Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford and the author of “Superintelligence,” told me. Clune is also what some researchers call an “A.I. doomer.” He doubts that we’ll recognize the approach of superhuman A.I. before it’s too late. “We’ll probably frog-boil ourselves into a situation where we get used to big advance, big advance, big advance, big advance,” he said. “And think of each one of those as, That didn’t cause a problem, that didn’t cause a problem, that didn’t cause a problem. And then you turn a corner, and something happens that’s now a much bigger step than you realize.”

What could we do today to prevent an uncontrolled expansion of A.I.’s power? Ross, of the W.H.O., drew some lessons from the way that biologists have developed a sense of shared responsibility for the safety of biological research. “What we are trying to promote is to say, Everybody needs to feel concerned,” she said of biology. “So it is the researcher in the lab, it is the funder of the research, it is the head of the research institute, it is the publisher, and, all together, that is actually what creates that safe space to conduct life research.” In the field of A.I., journals and conferences have begun to take into account the possible harms of publishing work in areas such as facial recognition. And, in 2021, a hundred and ninety-three countries adopted a Recommendation on the Ethics of Artificial Intelligence, created by the United Nations Educational, Scientific, and Cultural Organization (UNESCO). The recommendations focus on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). The organization doesn’t have regulatory power, but Mariagrazia Squicciarini, who runs a social-policies office at UNESCO, told me that countries might create regulations based on its recommendations; corporations might also choose to abide by them, in hopes that their products will work around the world.
This is an optimistic scenario. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn’t report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it’s legitimate to take action. But, in A.I., there’s no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. “There will be no fire alarm that is not an actual running AGI,” Yudkowsky has written.
Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom told me that he foresees a possible “race to the bottom,” with developers undercutting one another’s levels of caution. Earlier this year, an internal slide presentation leaked from Google indicated that the company planned to “recalibrate” its comfort with A.I. risk in light of heated competition.
International law restricts the development of nuclear weapons and ultra-dangerous pathogens. But it’s hard to imagine a similar regime of global regulations for A.I. development. “It seems like a very strange world where you have laws against doing machine learning, and some ability to try to enforce them,” Clune said. “The level of intrusion that would be required to stop people from writing code on their computers wherever they are in the world seems dystopian.” Russell, of Berkeley, pointed to the spread of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet “policing software directly—for example, trying to delete every single copy—is impossible,” he said. A.I. is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document attributed to an anonymous Google researcher addresses open-source efforts to imitate large language models such as ChatGPT and Google’s Bard. “We have no secret sauce,” the memo warns. “The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

Even if a FOOM were detected, who would pull the plug? A truly superintelligent A.I. might be smart enough to copy itself from place to place, making the task even more difficult. “I had this conversation with a movie director,” Russell recalled. “He wanted me to be a consultant on his superintelligence movie. The main thing he wanted me to help him understand was, How do the humans outwit the superintelligent A.I.? It’s, like, I can’t help you with that, sorry!” In a paper titled “The Off-Switch Game,” Russell and his co-authors write that “switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go.”

It’s possible that we won’t want to shut down a FOOMing A.I. A vastly capable system could make itself “indispensable,” Armstrong said—for example, “if it gives good economic advice, and we become dependent on it, then no one would dare pull the plug, because it would collapse the economy.” Or an A.I. might persuade us to keep it alive and execute its wishes. Before making GPT-4 public, OpenAI asked a nonprofit called the Alignment Research Center to test the system’s safety.
In one incident, when confronted with a CAPTCHA—an online test designed to distinguish between humans and bots, in which visually garbled letters must be entered into a text box—the A.I. contacted a TaskRabbit worker and asked for help solving it. The worker asked the model whether it needed assistance because it was a robot; the model replied, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.” Did GPT-4 “intend” to deceive? Was it executing a “plan”? Regardless of how we answer these questions, the worker complied.
Robin Hanson, an economist at George Mason University who has written a science-fiction-like book about uploaded consciousness and has worked as an A.I. researcher, told me that we worry too much about the singularity. “We’re combining all of these relatively unlikely scenarios into a grand scenario to make it all work,” he said. A computer system would have to become capable of improving itself; we’d have to vastly underestimate its abilities; and its values would have to drift enormously, turning it against us. Even if all of this were to happen, he said, the A.I. wouldn’t be able “to push a button and destroy the universe.”

Hanson offered an economic take on the future of artificial intelligence. If A.G.I. does develop, he argues, then it’s likely to happen in multiple places around the same time. The systems would then be put to economic use by the companies or organizations that developed them. The market would curtail their powers; investors, wanting to see their companies succeed, would go slow and add safety features. “If there are many taxi services, and one taxi service starts to, like, take its customers to strange places, then customers will switch to other suppliers,” Hanson said. “You don’t have to go to their power source and unplug them from the wall. You’re unplugging the revenue stream.”

A world in which multiple superintelligent computers coexist would be complicated. If one system goes rogue, Hanson said, we might program others to combat it. Alternatively, the first superintelligent A.I. to be invented might go about suppressing competitors. “That is a very interesting plot for a science-fiction novel,” Clune said. “You could also imagine a whole society of A.I.s. There’s A.I. police, there’s A.G.I.s that go to jail. It’s very interesting to think about.” But Hanson argued that these sorts of scenarios are so futuristic that they shouldn’t concern us. “I think, for anything you’re worried about, you have to ask what’s the right time to worry,” he said. Imagine that you could have foreseen nuclear weapons or automobile traffic a thousand years ago. “There wouldn’t have been much you could have done then to think usefully about them,” Hanson said. “I just think, for A.I., we’re well before that point.”

Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herbert’s novel “Dune,” in which humans have outlawed “thinking machines.” Why might researchers who fear catastrophe keep edging toward it? “I believe ever-more-powerful A.I. will be created regardless of what I do,” Clune told me; his goal, he said, is “to try to make its development go as well as possible for humanity.” Russell argued that stopping A.I. “shouldn’t be necessary if A.I.-research efforts take safety as a primary goal, as, for example, nuclear-energy research does.” A.I. is interesting, of course, and researchers enjoy working on it; it also promises to make some of them rich. And no one’s dead certain that we’re doomed. In general, people think they can control the things they make with their own hands. Yet chatbots today are already misaligned.
They falsify, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanity’s worst impulses. They are entrancing and useful but too complicated to understand or predict. And they are dramatically simpler, and more contained, than the future A.I. systems that researchers envision.
Let’s assume that the singularity is possible. Can we prevent it? Technologically speaking, the answer is yes—we just stop developing A.I. But, socially speaking, the answer may very well be no. The coördination problem may be too tough. In which case, although we could prevent the singularity, we won’t.
From a sufficiently cosmic perspective, one might feel that coexistence—or even extinction—is somehow O.K. Superintelligent A.I. might just be the next logical step in our evolution: humanity births something (or a collection of someones) that replaces us, just as we replaced our Darwinian progenitors. Alternatively, we might want humanity to continue, for at least a bit longer. In which case we should make an effort to avoid annihilation at the hands of superintelligent A.I., even if we feel that such an effort is unlikely to succeed.
That may require quitting A.I. cold turkey before we feel it’s time to stop, rather than getting closer and closer to the edge, tempting fate. But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should “be willing to destroy a rogue datacenter by airstrike,” even at the risk of sparking “a full nuclear exchange.” That prospect is, in itself, quite scary. And yet it may be that researchers’ fear of superintelligence is surpassed only by their curiosity. Will the singularity happen? What will it be like? Will it spell the end of us? Humanity’s insatiable inquisitiveness has propelled science and its technological applications this far. It could be that we can stop the singularity—but only at the cost of curtailing our curiosity. ♦
JooHee Yoon’s “Drawing Hands with A.I. (After M. C. Escher)” | The New Yorker (2023)
https://www.newyorker.com/culture/cover-story/cover-story-2023-04-24
Cover Story. By Françoise Mouly. Art by JooHee Yoon.

Chatbots and image generators, newly on the rise, have sparked our imaginations—and our fears. As artificial-intelligence machines sharpen their ability to translate written prompts into images that accurately capture both style and substance, some visual artists worry that their specialized skills might be rendered irrelevant. Even so, the new technologies at our disposal broaden our understanding of the relationship between artist and work. In her cover for the April 24 & May 1, 2023, Innovation & Tech Issue, her first for the magazine, JooHee Yoon addresses the topic in a clever image that illustrates the reciprocity and the tension that can exist between artists and these high-tech tools (is the robot hand drawing the real hand, or vice versa?). Yoon’s cover also demonstrates what makes artists unique: their ideas and their point of view.
M. C. Escher (1898-1972), a Dutch graphic artist whose approach was an inspiration for Yoon, is a case in point. Escher created many iconic works at the intersection of nature, mathematics, and perspective, using the unique language of the image to highlight a singular view of life’s puzzles and paradoxes. I talked to Yoon about inspiration, technology, artistic medium, and the impact of the new A.I. tools on real flesh-and-blood artists.
You found inspiration in M. C. Escher’s 1948 lithograph, “Drawing Hands.” What does this image mean to you?

Escher is an artist I greatly admired when I was a kid. I remember I had a puzzle with this exact image on it, and his tessellations kept me mesmerized for hours on end. I think I was drawn not only to the marvellous precision in his drawings but also to the witty concepts inherent in many of his pieces. There was a period in my elementary-school years in which I was obsessed with optical illusions. Doing this New Yorker cover feels a little like coming full circle.
“Drawing Hands,” M. C. Escher, 1948.
You drew this image on paper and colored it digitally, combining traditional media with digital techniques. What do different techniques offer your work?

I am a big proponent of using old-fashioned analog methods and using digital tools to support and enhance the image, rather than creating everything on the computer. I think the computer can be an amazing device, allowing for greater flexibility in editing and collage. But doing things by hand results in mistakes and a level of unpredictability that I value greatly. My use of the computer is very much influenced by my experiences with traditional media—it allows me to manipulate and edit images so that the elements all work in harmony. Without my background in creating screen prints and linocuts by hand, my understanding of color interaction and texture would be very different. There is also a very practical side to this combined method: I’ve freelanced for more than a decade, and the one thing I always wish I had more of is time. Some project deadlines can be as short as a few days, or even the same day, and my use of digital techniques partly grew from a need to be as efficient as possible.
You have mentioned that Saul Steinberg, who did many covers for the magazine, has influenced your work. How so?

Saul Steinberg is one of my artist heroes, and that cat looking out the window is my definition of a perfect image. It’s just too good! Especially the collaged cat. The imperfection and the artist’s lack of fear in showing that the piece is edited make this image stand out to me. When I first saw that cover, I felt equal parts awe and jealousy. Although Steinberg can seem like a polar opposite of Escher in terms of style—with his loose, beautiful, and spontaneous way of drawing—I think they share a sharp wit and on-point concepts. If I can channel even a fraction of that energy into my work, I’ll be happy.
In addition to doing freelance work, you teach. How do you and your students view the recent developments in A.I., which allow anyone to create images by typing word cues?

This has been a big topic of discussion in the illustration department, and across campus at the Rhode Island School of Design. Whether we’re discussing the ethical implications of using A.I., copyright issues, classroom policies on plagiarism, or future job security, the conversation has been equal parts fascinating and startling. The speed at which this technology is developing is astonishing, so right now I feel like I am still wrapping my brain around it. From my limited understanding, since the current image generators—like Midjourney and DALL-E—are text-to-image models, where you write in prompts to produce an image, it feels like a very different way of using your brain. Working with words to create art and working with your hands to create art seem like two separate activities to me.
The studio courses I teach are a direct extension of my freelance illustration practice, combining hands-on techniques such as printmaking, collage, and the Risograph, with the underpinning principle that the idea behind the image is important above all else. My students are tasked with coming up with the best visual interpretation to convey a concept, with emphasis on finding their voice as an illustrator. So much of artmaking is really getting to know yourself through the creative process, of making mistakes and going down rabbit holes of research and experimentation that sometimes work out—and sometimes don’t. But the failures are just as important as the successes, and it all contributes to a better understanding of oneself. Coming up with ideas is a very personal endeavor, stemming from one’s lived experience. It’s the seed that leads to the creation of art, whether it’s an image, a sculpture, a performance, a piece of music or writing. A.I. is a generalist by nature, scraping from all data available, so to me it seems like a fundamentally different approach. One of the things I tell my students is that it’s just as important to know what you don’t like to do in order to find that thing you truly enjoy doing. I think this self-discovery, of learning to know yourself, is where A.I. falls short and the human experience still prevails.
No one can find the animal that gave people covid-19 | MIT Technology Review (2021)
https://www.technologyreview.com/2021/03/26/1021263/bat-covid-coronavirus-cause-origin-wuhan

Here’s your guide to the WHO-China search for the origins of the coronavirus.
By Antonio Regalado. Art: Ms Tech | Getty, Unsplash.

A wild-animal trader who caught a strange new virus from a frozen pangolin. A lab worker studying bat viruses who slipped up and sniffed the air under her biosafety hood. A man who suddenly fell ill after collecting bat guano from a cave to use for fertilizer.
Were any of these scenarios what touched off the covid-19 pandemic? That is the question facing a joint international research team appointed by China and the World Health Organization that is now searching for the source of covid-19. What the researchers know so far is that a coronavirus very similar to some found in horseshoe bats made the jump into humans, appeared in the Chinese city of Wuhan by December 2019, and from there ignited the biggest health calamity of the 21st century.
We also know they haven’t found the critical detail: if it was in fact a virus with an origin in horseshoe bats, how did it make its way into humans from creatures living hundreds of miles away in remote caves? A 300-page report from the group is expected soon. It is intended to summarize everything that’s known about the early days of the outbreak and the Chinese effort to locate its source, and it’s likely to forward a favored hypothesis: that the virus, SARS-CoV-2, reached humans from bats via “an intermediate host species,” such as a wild animal sold as food in Wuhan’s markets.
That’s a reasonable theory: other bat coronaviruses have jumped to humans the same way. In fact, it was the origin of SARS, a similar coronavirus that panicked the world in 2003 when it spread out of southern China and sickened 8,000 people. With SARS, researchers tested caged market animals and quickly found a nearly identical virus in Himalayan palm civet cats and raccoon dogs, which are also eaten locally.
This time, though, the intermediate-host hypothesis has one big problem. More than a year after covid-19 began, no food animal has been identified as a reservoir for the pandemic virus. That’s despite efforts by China to test tens of thousands of animals, including pigs, goats, and geese, according to Liang Wannian, who leads the Chinese side of the research team. No one has found a “direct progenitor” of the virus, he says, and therefore the pandemic “remains an unsolved mystery.”

Politics at play

It’s important to know how the pandemic started, because after killing more than 2.5 million people and causing trillions of dollars in economic losses, it’s not over. The virus may well be establishing itself in new species, like wild rabbits or even house pets. Learning how the pandemic began could help health experts avert the next one, or at least react more swiftly.
We know that the payoffs of origin hunting are real. After the 2003 SARS outbreak, researchers started building up a big knowledge base about this type of virus. That knowledge is what turbocharged the development process for vaccines against the new coronavirus in early 2020. One Chinese company, Sinovac Biotech, actually dusted off a 16-year-old vaccine design it had shelved after the SARS outbreak was contained.
But some fear that all the research into bat viruses may have backfired in a shocking way. These people point to a striking coincidence: the Wuhan Institute of Virology, the world epicenter of research on dangerous SARS-like bat coronaviruses, to which SARS-CoV-2 is related, is in the same city where the pandemic first broke loose. They suspect that covid-19 is the result of an accidental leak from the lab.
“It’s possible they caused a pandemic they were intending to prevent,” says Matthew Pottinger, a former deputy national security advisor at the White House. Pottinger, who was a journalist working in China during the original SARS outbreak, believes it is “very much possible that it did emerge from the laboratory” and that the Chinese government is loath to admit it. Pottinger says that is why Beijing’s joint research with the WHO “is completely insufficient as far as a credible investigation.”

What’s certain is that the research to find the pandemic’s cause is politically charged because of the way it could assign blame for the global disaster. Since last spring, the hunt for the origin of what former president Donald Trump called the “China virus” has been in the crossfire of US-China trade battles and American charges that the WHO has played patsy for Beijing. China, meanwhile, has sought opportunities to spread responsibility. Chinese researchers have found ways to suggest that covid-19 started in Italy or that it arrived in Wuhan on frozen meat. This “cold chain” theory could cast the origin, and the blame, far beyond China’s borders.
One price of the politically charged atmosphere is that an entire year passed before WHO origins investigators got on the ground, arriving in January for a closely chaperoned trip. “It’s a year later, so you have to ask what took so long,” says Alan Schnur, a former WHO epidemiologist in China who helped track the original SARS outbreak. During that year, memories faded and so did antibodies, possibly erasing key clues.
Early clues

The joint investigation team consists of 15 members appointed by the WHO alongside a Chinese contingent, with veterinarians as well as experts in epidemiology and food safety. “There is a popular perception of a group of Sherlock Holmeses going in with magnifying glasses and swabs,” John Watson, a senior British epidemiologist on the mission, said during a webinar organized by Chatham House in March. “But that is not how it was set up.” Instead, Beijing and the WHO agreed last summer to a series of scientific studies that were carried out in China. When the foreign members visited Wuhan in January, it was to help in a joint assessment of the evidence China had found, not to scour the city for new facts. “There was no freedom at all to wander around,” Watson has said.
According to Peter Ben Embarek, a WHO food safety official, the team’s two primary aims were to determine exactly when the outbreak started and then to learn how it emerged and jumped into the human population. To do that, he says, they relied on three types of data: genetic sequences of the virus, tests on animals, and epidemiological research into the earliest cases.
The reason finding the very first people with covid-19 is important is that it would let disease sleuths look for shared factors, like jobs or habits. Did they all shop in the same stores? Were they recent travelers from out of town, or perhaps family members of laboratory scientists? In the original SARS, it quickly became clear that chefs and people handling animals were the first cases. More of them had antibodies to the virus, too. That demonstrated a connection to food animals, which was quickly confirmed when a team from Hong Kong found an almost identical virus in civets held in market cages.
What scientists back then didn’t know was the ultimate origin of the germ, which they figured out in the following years. First, they discovered that SARS-like viruses make their natural home in horseshoe bats.
And finally, in 2013, they found a virus that not only was very similar but also was capable of infecting humans. Shi Zhengli, the chief bat virus researcher at the Wuhan Institute of Virology, who was at the center of that work, called it the “missing link” in the hunt for the origin of SARS.
The hunt this time is fundamentally different. A likely origin for covid-19 is already known: it’s very close to known bat viruses. Even before the outbreak started, the Wuhan Institute had studied one whose genetic code is 96% identical to SARS-CoV-2. That’s as good a match as the “missing link” found for the original SARS.
That means the burning question now isn’t so much the deep origin of the virus as how such a pathogen would have ended up in the city of Wuhan.
A first step was to double-check that the outbreak really did start in Wuhan, not elsewhere. China undertook a fairly vast effort to see if covid-19 could have been spreading, unseen, any earlier than December 2019. Chinese researchers checked records of more than 200 hospitals around the country for suspicious pneumonias, tracked how much cough syrup pharmacies had sold, and tested 4,500 biospecimens stored before the outbreak, including blood samples that could be screened for antibodies. The WHO team says it even interviewed the office worker who, on December 8, 2019, became the first recognized covid-19 case in China.
So far, there is no evidence the outbreak went undetected elsewhere before the Wuhan cases. Genetic evidence also narrows the chance that the virus was spreading much earlier. Because of how the germ has accumulated mutations with time, it’s possible to estimate when it first started spreading between people. That data, too, points to a start date of late 2019.
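The arithmetic behind such molecular-clock estimates is simple, even if the real analyses are not. The sketch below is an order-of-magnitude illustration only; the genome length, substitution rate, and divergence figures are assumptions standing in for the study’s actual values.

```python
# Back-of-envelope molecular-clock dating; all inputs are rough assumptions.
genome_length = 30_000           # nucleotides, approximately, for SARS-CoV-2
subs_per_site_per_year = 1e-3    # assumed substitution ("clock") rate
observed_differences = 5         # assumed divergence among early genomes

mutations_per_year = genome_length * subs_per_site_per_year  # about 30
years_of_spread = observed_differences / mutations_per_year
print(f"roughly {years_of_spread * 12:.0f} months of circulation")  # ~2 months
```

Applied to genomes sampled around the turn of 2020, numbers on that order are what point to a late-2019 start.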
About half the early cases, in December, had a link to the Huanan Wholesale Seafood Market, a maze of stalls selling frozen fish and some wild animals. That’s why animal markets are under suspicion. But the case is not airtight. The genetic evidence indicates that these cases are a branch of the early outbreak—that the market was a place where its spread was amplified, but not necessarily the starting point.
“The picture we see is a classical picture of an emerging outbreak, starting with a few sporadic cases, then seeing it spread in clusters, including in the Huanan market,” Ben Embarek said during a three-hour February press conference in Wuhan where the joint team reviewed its findings.
Ranking hypotheses

That leaves the question of how, and where, the virus jumped to humans. During the same press conference, Ben Embarek and Liang, the leaders of the WHO-China team, laid out what they called four main hypotheses and ranked them, from least to most likely.
The first was that someone became directly infected by a bat or its guano. Because of how these viruses can attach to receptors on human cells, direct infection is a possibility. But direct transmission isn’t favored as the cause of the current pandemic. That’s because the bats harboring SARS-like viruses live many hundreds of miles from Wuhan. “Since Wuhan is not a city or environment close to these bats’ environment, a direct jump from bats is not very likely,” Ben Embarek said during the press event.
The researchers went on to dismiss the lab accident theory as “extremely unlikely,” saying they had agreed not to pursue it any further. Their reasoning was fairly simple: Chinese scientists at several Wuhan labs told them they had never seen the virus before and hadn’t worked on it. “There could be a leak of a virus, but it should be a known or existing virus,” Liang reasoned, according to a translator. “If it doesn't exist, there will be no way that this virus would be leaked.” That argument is not foolproof. Local labs were in the business of retrieving samples from bat caves and bringing them to Wuhan for study. That means researchers could have come into contact with unfamiliar viruses. Nor have the labs been entirely forthcoming about what viruses they do know about. The Wuhan Institute of Virology possesses gene information about similar viruses that it has not released publicly. Other information disappeared from view when the institute took a database offline.
One problem with the lab leak theory is that it presumes the Chinese are lying or hiding facts, a position incompatible with a joint scientific effort. This may have been why the WHO team, for instance, never asked to see the offline database. Peter Daszak, president of the EcoHealth Alliance, which collaborated with the Wuhan lab for many years and funded some of its work, says there is “no evidence” whatsoever to back the lab theory. “If you just firmly believe [that] what we hear from our Chinese colleagues over there in the labs is not going to be true, we will never be able to rule it out,” he said of the lab theory. “That is the problem. In its essence, that theory is not a conspiracy theory. But people have put it forward as such, saying the Chinese side conspired to cover up evidence.”

To those who believe a lab accident is likely, including Jamie Metzl, a technology and national security fellow at the Atlantic Council, the WHO team isn’t set up to carry out the sort of forensic probe he believes is necessary. “Everyone on earth is a stakeholder in this,” he says. “It’s crazy that a year into this, there is no full investigation into the origins of the pandemic.” In February, Metzl published a statement in which he said he was “appalled” by the investigators’ quick rebuttal of the lab hypothesis and called for Daszak to be removed from the team. Several days later, the WHO director general, Tedros Adhanom Ghebreyesus, appeared to rebuke the origins team in a speech in which he said, “I want to clarify that all hypotheses remain open and require further study.”

The scenario the WHO-China team said it considers most probable is the “intermediary” theory, in which a bat virus infected another wild animal that was then caught or farmed for food. The intermediary theory does have the strongest precedents. Not only is there the case of SARS, but in 2012 researchers discovered Middle East respiratory syndrome (MERS), a deadly lung infection caused by another coronavirus, and quickly traced it to dromedary camels.
The trouble with this hypothesis is that Chinese researchers have not succeeded in finding a “direct progenitor” of this virus in any animal they’ve looked at. Liang said China had tested 50,000 animal specimens, including 1,100 bats in Hubei province, where Wuhan is located. But no luck: a matching virus still hasn’t been found.
The Chinese team appears to strongly favor a twist on the intermediate-animal idea: that the virus could have reached Wuhan on a frozen food shipment that included a frozen wild animal. This “cold chain” hypothesis may have appeal because it would mean the virus came from thousands of miles away, even outside China. “We think that is a valid option,” says Marion Koopmans, a Dutch virologist who traveled with the group. She said China had tested 1.5 million frozen samples and found the virus 30 times. “That may not be surprising in the middle of an outbreak, when many people are handling these products,” Koopmans says. “But the WHO did request studies, spiked the virus onto fish, froze and thawed it, and could culture the virus. So it’s possible. You cannot rule it out.”

Blame game

The WHO-China team, in its eventual report, is expected to suggest further research that needs to be carried out. This is one reason the report matters; it may determine which questions get asked and which don’t.
There is likely to be a larger effort to trace the wild-animal trade, including supply chains of frozen products. In addition to animal evidence, Ben Embarek said China should make a greater effort to locate people who were infected by covid-19 early on but were perhaps asymptomatic or didn’t get tested. That could be done by hunting through samples in blood banks, using newer, more sensitive technology to locate antibodies. “We need to keep looking for material that could give insight into the early days of the events,” Ben Embarek said. The report is also likely to call for the creation of a master database that includes all the data collected so far.
Ultimately, in seeking the cause of the covid-19 disaster, we don’t just want to know what happened. We’re also looking for something—or someone—to blame. And each hypothesis points to a different culprit. To ecologists, the lesson of the pandemic is nearly a foregone conclusion: humans should stop encroaching on wild areas. “We have come to recognize how this kind of investigation is not just about illness in humans—nor indeed just about an interface between humans and animals—but feeds into an altogether wider discussion about how we use the world,” says John Watson, the British epidemiologist.
The Chinese authorities, meanwhile, are already taking action on the intermediary theory by putting responsibility on wild-animal farmers and traders. Last February, according to NPR, China’s legislature started taking steps to “uproot the pernicious habit of eating wild animals.” At the behest of President Xi Jinping, they have already banned the hunting, trade, and consumption of a large number of “terrestrial wild animals,” a step never fully implemented after the original SARS outbreak. According to a report in Nature, the Chinese government has already closed 12,000 businesses, purged a million websites with information about wildlife trading, and banned the farming of bamboo rats and civets, among other species.
Then there is the chance covid-19 is the result of a laboratory accident. If that’s true, it would bring the sharpest consequences, especially for scientists like those in charge of finding the virus’s origin. If the pandemic was caused by ambitious, high-tech research on dangerous germs, it would mean China’s fast rise as a biotech powerhouse is a threat to the globe. It would mean this type of science should be severely restricted, or even banned, in China and everywhere else. More than any other hypothesis, a government-sponsored technology program run amok—along with early efforts to conceal news of the outbreak—would establish a case for retribution. “If this is a man-made catastrophe,” says Miles Yu, an analyst with the conservative Hudson Institute, “I think the world should seek reparations.”
According to some former virus chasers, what’s actually in the WHO-China origins report may be different from what we’ve heard so far. Schnur says the Chinese probably already know much more than we think, so the role of the team could be to find ways to push those facts into the light. It is a process he calls “part diplomacy and part epidemiology.” He believes China’s investigation was likely very thorough and that the foreign visitors may also have stronger views than they have let on so far.
As he points out, “What you say in a press conference may be different than what you put in a report once you have left the country.”
by Antonio Regalado
" |
192 | 2,011 | "The Rise and Fall of Bitcoin | WIRED" | "https://www.wired.com/2011/11/mf-bitcoin" | "By Benjamin Wallace. Illustration: Martin Venezky
On November 1, 2008, a man named Satoshi Nakamoto posted a research paper to an obscure cryptography listserv describing his design for a new digital currency that he called bitcoin. None of the list's veterans had heard of him, and what little information could be gleaned was murky and contradictory. In an online profile, he said he lived in Japan. His email address was from a free German service. Google searches for his name turned up no relevant information; it was clearly a pseudonym. But while Nakamoto himself may have been a puzzle, his creation cracked a problem that had stumped cryptographers for decades. The idea of digital money—convenient and untraceable, liberated from the oversight of governments and banks—had been a hot topic since the birth of the Internet. Cypherpunks, the 1990s movement of libertarian cryptographers, dedicated themselves to the project. Yet every effort to create virtual cash had foundered. Ecash, an anonymous system launched in the early 1990s by cryptographer David Chaum, failed in part because it depended on the existing infrastructures of government and credit card companies. Other proposals followed—bit gold, RPOW, b-money—but none got off the ground.
One of the core challenges of designing a digital currency involves something called the double-spending problem. If a digital dollar is just information, free from the corporeal strictures of paper and metal, what's to prevent people from copying and pasting it as easily as a chunk of text, "spending" it as many times as they want? The conventional answer involved using a central clearinghouse to keep a real-time ledger of all transactions—ensuring that, if someone spends his last digital dollar, he can't then spend it again. The ledger prevents fraud, but it also requires a trusted third party to administer it.
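A toy version of that clearinghouse ledger makes the point concrete. The Python sketch below is an illustration for this article, not code from any real payment system: it records each coin's serial number as it is spent and refuses a second spend.

```python
# A toy clearinghouse ledger (illustrative only): once a digital coin's
# serial number is recorded as spent, spending the same coin again fails.
class Ledger:
    def __init__(self):
        self.spent = set()  # serial numbers of coins already spent

    def spend(self, serial):
        if serial in self.spent:
            return False  # double-spend attempt: same coin, spent twice
        self.spent.add(serial)
        return True

ledger = Ledger()
print(ledger.spend("coin-001"))  # True: the first spend clears
print(ledger.spend("coin-001"))  # False: the copy-pasted coin is refused
```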
Bitcoin did away with the third party by publicly distributing the ledger, what Nakamoto called the "block chain." Users willing to devote CPU power to running a special piece of software would be called miners and would form a network to maintain the block chain collectively. In the process, they would also generate new currency. Transactions would be broadcast to the network, and computers running the software would compete to solve irreversible cryptographic puzzles that contain data from several transactions. The first miner to solve each puzzle would be awarded 50 new bitcoins, and the associated block of transactions would be added to the chain. The difficulty of each puzzle would increase as the number of miners increased, which would keep production to one block of transactions roughly every 10 minutes. In addition, the size of each block bounty would halve every 210,000 blocks—first from 50 bitcoins to 25, then from 25 to 12.5, and so on. Around the year 2140, the currency would reach its preordained limit of 21 million bitcoins.
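The arithmetic behind that 21 million cap is easy to verify. Here is a short Python sketch of the bounty schedule just described (it ignores the integer rounding the real software applies, so the figure is approximate):

```python
# Sum the bounty schedule: 50 coins per block, halving every
# 210,000 blocks, until the reward falls below one satoshi (1e-8).
total, reward = 0.0, 50.0
while reward >= 1e-8:
    total += 210_000 * reward  # coins minted during this reward era
    reward /= 2                # the block bounty halves
print(f"{total:,.0f} bitcoins")  # ~21,000,000
```

Because each era mints half as many coins as the one before, the sum converges on 210,000 × 50 × 2, or 21 million.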
When Nakamoto's paper came out in 2008, trust in the ability of governments and banks to manage the economy and the money supply was at its nadir. The US government was throwing dollars at Wall Street and the Detroit car companies. The Federal Reserve was introducing "quantitative easing," essentially printing money in order to stimulate the economy. The price of gold was rising. Bitcoin required no faith in the politicians or financiers who had wrecked the economy—just in Nakamoto's elegant algorithms. Not only did bitcoin's public ledger seem to protect against fraud, but the predetermined release of the digital currency kept the bitcoin money supply growing at a predictable rate, immune to printing-press-happy central bankers and Weimar Republic-style hyperinflation.
Bitcoin's chief proselytizer, Bruce Wagner, at one of the few New York City restaurants that accept the currency. Photo: Michael Schmelling
Nakamoto himself mined the first 50 bitcoins—which came to be called the genesis block—on January 3, 2009. For a year or so, his creation remained the province of a tiny group of early adopters. But slowly, word of bitcoin spread beyond the insular world of cryptography. It has won accolades from some of digital currency's greatest minds. Wei Dai, inventor of b-money, calls it "very significant"; Nick Szabo, who created bit gold, hails bitcoin as "a great contribution to the world"; and Hal Finney, the eminent cryptographer behind RPOW, says it's "potentially world-changing." The Electronic Frontier Foundation, an advocate for digital privacy, eventually started accepting donations in the alternative currency.
The small band of early bitcoiners all shared the communitarian spirit of an open source software project. Gavin Andresen, a coder in New England, bought 10,000 bitcoins for $50 and created a site called the Bitcoin Faucet, where he gave them away for the hell of it. Laszlo Hanyecz, a Florida programmer, conducted what bitcoiners think of as the first real-world bitcoin transaction, paying 10,000 bitcoins to get two pizzas delivered from Papa John's. (He sent the bitcoins to a volunteer in England, who then called in a credit card order transatlantically.) A farmer in Massachusetts named David Forster began accepting bitcoins as payment for alpaca socks.
When they weren't busy mining, the faithful tried to solve the mystery of the man they called simply Satoshi. On a bitcoin IRC channel, someone noted portentously that in Japanese Satoshi means "wise." Someone else wondered whether the name might be a sly portmanteau of four tech companies: SAmsung, TOSHIba, NAKAmichi, and MOTOrola. It seemed doubtful that Nakamoto was even Japanese. His English had the flawless, idiomatic ring of a native speaker.
Perhaps, it was suggested, Nakamoto wasn't one man but a mysterious group with an inscrutable purpose—a team at Google, maybe, or the National Security Agency. "I exchanged some emails with whoever Satoshi supposedly is," says Hanyecz, who was on bitcoin's core developer team for a time. "I always got the impression it almost wasn't a real person. I'd get replies maybe every two weeks, as if someone would check it once in a while. Bitcoin seems awfully well designed for one person to crank out."
Nakamoto revealed little about himself, limiting his online utterances to technical discussion of his source code. On December 5, 2010, after bitcoiners started to call for Wikileaks to accept bitcoin donations, the normally terse and all-business Nakamoto weighed in with uncharacteristic vehemence. "No, don't 'bring it on,'" he wrote in a post to the bitcoin forum. "The project needs to grow gradually so the software can be strengthened along the way. I make this appeal to Wikileaks not to try to use bitcoin. Bitcoin is a small beta community in its infancy. You would not stand to get more than pocket change, and the heat you would bring would likely destroy us at this stage."
Then, as unexpectedly as he had appeared, Nakamoto vanished. At 6:22 pm GMT on December 12, seven days after his Wikileaks plea, Nakamoto posted his final message to the bitcoin forum, concerning some minutiae in the latest version of the software. His email responses became more erratic, then stopped altogether. Andresen, who had taken over the role of lead developer, was now apparently one of just a few people with whom he was still communicating. On April 26, Andresen told fellow coders: "Satoshi did suggest this morning that I (we) should try to de-emphasize the whole 'mysterious founder' thing when talking publicly about bitcoin." Then Nakamoto stopped replying even to Andresen's emails. Bitcoiners wondered plaintively why he had left them. But by then his creation had taken on a life of its own.
Bitcoin 101
Bitcoin's economy consists of a network of its users' computers. At preset intervals, an algorithm releases new bitcoins into the network: 50 every 10 minutes, with the pace halving in increments until around 2140. The automated pace is meant to ensure regular growth of the monetary supply without interference by third parties, like a central bank, which can lead to hyperinflation.
To prevent fraud, the bitcoin software maintains a pseudonymous public ledger of every transaction. Some bitcoiners' computers validate transactions by cracking cryptographic puzzles, and the first to solve each puzzle receives 50 new bitcoins. Bitcoins can be stored in a variety of places—from a "wallet" on a desktop computer to a centralized service in the cloud.
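In practice, "cracking" one of those puzzles means brute-force search. The sketch below is a simplified, hashcash-style stand-in for Bitcoin's actual rules: it hunts for a nonce that pushes a SHA-256 hash below a difficulty target.

```python
import hashlib

def mine(block_data, difficulty_bits=20):
    """Find a nonce whose SHA-256 hash falls below the difficulty target."""
    target = 2 ** (256 - difficulty_bits)  # more bits = smaller target = harder
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # solved: this nonce is the "proof of work"
        nonce += 1

print(mine(b"block of transactions"))
```

Each extra bit of difficulty roughly doubles the expected search time, which is how the network can hold block production near one every 10 minutes as more miners join.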
Once users download the bitcoin app to their machine, spending the currency is as easy as sending an email. The range of merchants that accept it is small but growing; look for the telltale symbol at the cash register. And entrepreneurial bitcoiners are working to make it much easier to use the currency, building everything from point-of-service machines to PayPal alternatives.
Illustrations: Martin Venezky
"Bitcoin enthusiasts are almost evangelists," Bruce Wagner says. "They see the beauty of the technology. It's a huge movement. It's almost like a religion. On the forum, you'll see the spirit. It's not just me, me, me. It's what's for the betterment of bitcoin."
It's a July morning. Wagner, whose boyish energy and Pantone-black hair belie his 50 years, is sitting in his office at OnlyOneTV, an Internet television startup in Manhattan. Over just a few months, he has become bitcoin's chief proselytizer. He hosts The Bitcoin Show, a program on OnlyOneTV in which he plugs the nascent currency and interviews notables from the bitcoin world. He also runs a bitcoin meetup group and is gearing up to host bitcoin's first "world conference" in August. "I got obsessed and didn't eat or sleep for five days," he says, recalling the moment he discovered bitcoin. "It was bitcoin, bitcoin, bitcoin, like I was on crystal meth!" Wagner is not given to understatement. While bitcoin is "the most exciting technology since the Internet," he says, eBay is "a giant bloodsucking corporation" and free speech "a popular myth." He is similarly excitable when predicting the future of bitcoin. "I knew it wasn't a stock and wouldn't go up and down," he explains. "This was something that was going to go up, up, up."
For a while, he was right. Through 2009 and early 2010, bitcoins had no value at all, and for the first six months after they started trading in April 2010, the value of one bitcoin stayed below 14 cents. Then, as the currency gained viral traction in summer 2010, rising demand for a limited supply caused the price on online exchanges to start moving. By early November, it surged to 36 cents before settling down to around 29 cents. In February 2011, it rose again and was mentioned on Slashdot for achieving "dollar parity"; it hit $1.06 before settling in at roughly 87 cents.
In the spring, catalyzed in part by a much-linked Forbes story on the new "crypto currency," the price exploded. From early April to the end of May, the going rate for a bitcoin rose from 86 cents to $8.89. Then, after Gawker published a story on June 1 about the currency's popularity among online drug dealers, it more than tripled in a week, soaring to about $27. The market value of all bitcoins in circulation was approaching $130 million. A Tennessean dubbed KnightMB, who held 371,000 bitcoins, became worth more than $10 million, the richest man in the bitcoin realm. The value of those 10,000 bitcoins Hanyecz used to buy pizza had risen to $272,329. "I don't feel bad about it," he says. "The pizza was really good."
Bitcoin was drawing the kind of attention normally reserved for overhyped Silicon Valley IPOs and Apple product launches. On his Internet talk show, journo-entrepreneur Jason Calacanis called it "a fundamental shift" and "one of the most interesting things I've seen in 20 years in the technology business." Prominent venture capitalist Fred Wilson heralded "societal upheaval" as the Next Big Thing on the Internet, and the four examples he gave were Wikileaks, PlayStation hacking, the Arab Spring, and bitcoin. Andresen, the coder, accepted an invitation from the CIA to come to Langley, Virginia, to speak about the currency. Rick Falkvinge, founder of the Swedish Pirate Party (whose central policy plank includes the abolition of the patent system), announced that he was putting his life savings into bitcoins.
The future of bitcoin seemed to shimmer with possibility. Mark Suppes, an inventor building a fusion reactor in a Brooklyn loft from eBay-sourced parts, got an old ATM and began retrofitting it to dispense cash for bitcoins. On the so-called secret Internet (the invisible grid of sites reachable by computers using Tor anonymizing software), the black-and-gray-market site Silk Road anointed the bitcoin the coin of the realm; you could use bitcoins to buy everything from Purple Haze pot to Fentanyl lollipops to a kit for converting a rifle into a machine gun. A young bitcoiner, The Real Plato, brought On the Road into the new millennium by video-blogging a cross-country car trip during which he spent only bitcoins. Numismatic enthusiasts among the currency's faithful began dreaming of collectible bitcoins, wondering what price such rarities as the genesis block might fetch.
As the price rose and mining became more popular, the increased competition meant decreasing profits. An arms race commenced. Miners looking for horsepower supplemented their computers with more powerful graphics cards, until they became nearly impossible to find. Where the first miners had used their existing machines, the new wave, looking to mine bitcoins 24 hours a day, bought racks of cheap computers with high-speed GPUs cooled by noisy fans. The boom gave rise to mining-rig porn, as miners posted photos of their setups. As in any gold rush, people recounted tales of uncertain veracity. An Alaskan named Darrin reported that a bear had broken into his garage but thankfully ignored his rig. Another miner's electric bill ran so high, it was said, that police raided his house, suspecting that he was growing pot.
Amid the euphoria, there were troubling signs. Bitcoin had begun in the public-interested spirit of open source peer-to-peer software and libertarian political philosophy, with references to the Austrian school of economics. But real money was at stake now, and the dramatic price rise had attracted a different element, people who saw the bitcoin as a commodity in which to speculate. At the same time, media attention was bringing exactly the kind of heat that Nakamoto had feared. US senator Charles Schumer held a press conference, appealing to the DEA and Justice Department to shut down Silk Road, which he called "the most brazen attempt to peddle drugs online that we have ever seen" and describing bitcoin as "an online form of money-laundering."
Meanwhile, a cult of Satoshi was developing. Someone started selling I AM SATOSHI NAKAMOTO T-shirts. Disciples lobbied to name the smallest fractional denomination of a bitcoin a "satoshi." There was Satoshi-themed fan fiction and manga art. And bitcoiners continued to ponder his mystery. Some speculated that he had died. A few postulated that he was actually Wikileaks founder Julian Assange. Many more were convinced that he was Gavin Andresen. Still others believed that he must be one of the older crypto-currency advocates—Finney or Szabo or Dai. Szabo himself suggested it could be Finney or Dai. Stefan Thomas, a Swiss coder and active community member, graphed the time stamps for each of Nakamoto's 500-plus bitcoin forum posts; the resulting chart showed a steep decline to almost no posts between the hours of 5 am and 11 am Greenwich Mean Time. Because this pattern held true even on Saturdays and Sundays, it suggested that the lull was occurring when Nakamoto was asleep, rather than at work. (The hours of 5 am to 11 am GMT are midnight to 6 am Eastern Standard Time.) Other clues suggested that Nakamoto was British: A newspaper headline he had encoded in the genesis block came from the UK-published Times of London, and both his forum posts and his comments in the bitcoin source code used such Brit spellings as optimise and colour.
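Thomas' technique is simple to reproduce: bucket each post's time stamp by hour of day and look for the quiet stretch that marks sleep. In this Python sketch the time stamps are invented stand-ins, not Nakamoto's actual posts:

```python
from collections import Counter
from datetime import datetime

# Hypothetical stand-ins for the 500-plus real forum time stamps (GMT).
posts = ["2010-12-05 18:22", "2010-07-17 02:33", "2010-06-21 15:48",
         "2010-11-02 21:05", "2011-02-13 14:10"]

by_hour = Counter(datetime.strptime(p, "%Y-%m-%d %H:%M").hour for p in posts)
for hour in range(24):
    # A run of consistently empty hours suggests when the poster sleeps.
    print(f"{hour:02d}:00 GMT  {'#' * by_hour[hour]}")
```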
Play Dough: Key moments in the short and volatile life of bitcoin.
Even the purest technology has to live in an impure world. Both the code and the idea of bitcoin may have been impregnable, but bitcoins themselves—unique strings of numbers that constitute units of the currency—are discrete pieces of information that have to be stored somewhere. By default, bitcoin kept users' currency in a digital "wallet" on their desktop, and when bitcoins were worth very little, easy to mine, and possessed only by techies, that was sufficient. But once they started to become valuable, a PC felt inadequate. Some users protected their bitcoins by creating multiple backups, encrypting and storing them on thumb drives, on forensically scrubbed virgin computers without Internet connections, in the cloud, and on printouts stored in safe-deposit boxes. But even some sophisticated early adopters had trouble keeping their bitcoins safe. Stefan Thomas had three copies of his wallet yet inadvertently managed to erase two of them and lose his password for the third. In a stroke, he lost about 7,000 bitcoins, at the time worth about $140,000. "I spent a week trying to recover it," he says. "It was pretty painful."
Most people who have cash to protect put it in a bank, an institution about which the more zealous bitcoiners were deeply leery. Instead, for this new currency, a primitive and unregulated financial-services industry began to develop. Fly-by-night online "wallet services" promised to safeguard clients' digital assets. Exchanges allowed anyone to trade bitcoins for dollars or other currencies. Bitcoin itself might have been decentralized, but users were now blindly entrusting increasing amounts of currency to third parties that even the most radical libertarian would be hard-pressed to claim were more secure than federally insured institutions. Most were Internet storefronts, run by who knows who from who knows where.
Sure enough, as the price headed upward, disturbing events began to bedevil the bitcoiners. In mid-June, someone calling himself Allinvain reported that 25,000 bitcoins worth more than $500,000 had been stolen from his computer. (To this day, nobody knows whether this claim is true.) About a week later, a hacker pulled off an ingenious attack on a Tokyo-based exchange site called Mt. Gox, which handled 90 percent of all bitcoin exchange transactions. Mt. Gox restricted account withdrawals to $1,000 worth of bitcoins per day (at the time of the attack, roughly 35 bitcoins). After he broke into Mt. Gox's system, the hacker simulated a massive sell-off, driving the exchange rate to zero and letting him withdraw potentially tens of thousands of other people's bitcoins.
As it happened, market forces conspired to thwart the scheme. The price plummeted, but as speculators flocked to take advantage of the fire sale, they quickly drove it back up, limiting the thief's haul to only around 2,000 bitcoins. The exchange ceased operations for a week and rolled back the postcrash transactions, but the damage had been done; the bitcoin never got back above $17. Within a month, Mt. Gox had lost 10 percent of its market share to a Chile-based upstart named TradeHill. Most significantly, the incident had shaken the confidence of the community and inspired loads of bad press.
In the public's imagination, overnight the bitcoin went from being the currency of tomorrow to a dystopian joke. The Electronic Frontier Foundation quietly stopped accepting bitcoin donations. Two Irish scholars specializing in network analysis demonstrated that bitcoin wasn't nearly as anonymous as many had assumed: They were able to identify the handles of a number of people who had donated bitcoins to Wikileaks. (The organization announced in June 2011 that it was accepting such donations.) Nontechnical newcomers to the currency, expecting it to be easy to use, were disappointed to find that an extraordinary amount of effort was required to obtain, hold, and spend bitcoins. For a time, one of the easier ways to buy them was to first use PayPal to buy Linden dollars, the virtual currency in Second Life, then trade them within that make-believe universe for bitcoins. As the tone of media coverage shifted from gee-whiz to skeptical, attention that had once been thrilling became a source of resentment.
Illustration: Martin Venezky
More disasters followed. Poland-based Bitomat, the third-largest exchange, revealed that it had—oops—accidentally overwritten its entire wallet. Security researchers detected a proliferation of viruses aimed at bitcoin users: Some were designed to steal wallets full of existing bitcoins; others commandeered processing power to mine fresh coins. By summer, the oldest wallet service, MyBitcoin, stopped responding to emails. It had always been fishy—registered in the West Indies and run by someone named Tom Williams, who never posted in the forums. But after a month of unbroken silence, Wagner, the New York City bitcoin evangelist, finally stated what many had already been thinking: Whoever was running MyBitcoin had apparently gone AWOL with everyone's money. Wagner himself revealed that he had been keeping all 25,000 or so of his bitcoins on MyBitcoin and had recommended to friends and relatives that they use it, too. He also aided a vigilante effort that publicly named several suspects. MyBitcoin's supposed owner resurfaced, claiming his site had been hacked. Then Wagner became the target of a countercampaign that publicized a successful lawsuit against him for mortgage fraud, costing him much of his reputation within the community. "People have the mistaken impression that virtual currency means you can trust a random person over the Internet," says Jeff Garzik, a member of bitcoin's core developer group.
And nobody had been as trusted as Nakamoto himself, who remained mysteriously silent as the world he created threatened to implode. Some bitcoiners began to suspect that he was working for the CIA or Federal Reserve. Others worried that bitcoin had been a Ponzi scheme, with Nakamoto its Bernie Madoff—mining bitcoins when they were worthless, then waiting for their value to rise. The most dedicated bitcoin loyalists maintained their faith, not just in Nakamoto, but in the system he had built. And yet, unmistakably, beneath the paranoia and infighting lurked something more vulnerable, an almost theodical disappointment. What bitcoiners really seemed to be asking was, why had Nakamoto created this world only to abandon it?
If Nakamoto has forsaken his adherents, though, they are not prepared to let his creation die. Even as the currency's value has continued to drop, they are still investing in the fragile economy. Wagner has advocated for it to be used by people involved in the Occupy Wall Street movement. While the gold-rush phase of mining has ended, with some miners dumping their souped-up mining rigs—"People are getting sick of the high electric bills, the heat, and the loud fans," Garzik says—the more serious members of the community have turned to infrastructure. Mt. Gox is developing point-of-sale hardware. Other entrepreneurs are working on PayPal-like online merchant services. Two guys in Colorado have launched BitcoinDeals, an etailer offering "over 1,000,000 items." The underworld's use of the bitcoin has matured, too: Silk Road is now just one of many Tor-enabled back alleys, including sites like Black Market Reloaded, where self-proclaimed hit men peddle contract killings and assassinations.
"You could say it's following Gartner's Hype Cycle," London-based core developer Amir Taaki says, referring to a theoretical technology-adoption-and-maturation curve that begins with a "technology trigger," ascends to a "peak of inflated expectations," collapses into a "trough of disillusionment," and then climbs a "slope of enlightenment" until reaching a "plateau of productivity." By this theory, bitcoin is clambering out of the trough, as people learn to value the infallible code and discard the human drama and wild fluctuations that surround it.
But that distinction is ultimately irrelevant. The underlying vulnerabilities that led to bitcoin's troubles—its dependence on unregulated, centralized exchanges and online wallets—persist. Indeed, the bulk of mining is now concentrated in a handful of huge mining pools, which theoretically could hijack the entire network if they worked in concert.
Beyond the most hardcore users, skepticism has only increased. Nobel Prize-winning economist Paul Krugman wrote that the currency's tendency to fluctuate has encouraged hoarding. Stefan Brands, a former ecash consultant and digital currency pioneer, calls bitcoin "clever" and is loath to bash it but believes it's fundamentally structured like "a pyramid scheme" that rewards early adopters. "I think the big problems are ultimately the trust issues," he says. "There's nothing there to back it up. I know the counterargument, that that's true of fiat money, too, but that's completely wrong. There's a whole trust fabric that's been established through legal mechanisms."
It would be interesting to know what Nakamoto thinks of all this, but he's not talking. He didn't respond to emails, and the people who might know who he is say they don't. Andresen flatly denies he is Nakamoto. "I don't know his real name," he says. "I'm hoping one day he decides not to be anonymous anymore, but I expect not." Szabo also denies that he is Nakamoto, and so does Dai. Finney, who has blogged eloquently about being diagnosed with amyotrophic lateral sclerosis, sent his denial in an email: "Under my current circumstances, facing limited life expectancy, I would have little to lose by shedding anonymity. But it was not I." Both The New Yorker and Fast Company have launched investigations but ended up with little more than speculation.
The signal in the noise, the figure that emerges from the carpet of clues, suggests an academic with somewhat outdated programming training. (Nakamoto's style of notation "was popular in the late '80s and early '90s," Taaki notes. "Maybe he's around 50, plus or minus 10 years.") Some conjecturers are confident in their precision. "He has at best a master's," says a digital-currency expert. "It seems quite obvious it's one of the developers. Maybe Gavin, just looking at his background." "I suspect Satoshi is a small team at a financial institution," whitehat hacker Dan Kaminsky says. "I just get that feeling. He's a quant who may have worked with some of his friends." But Garzik, the developer, says that the most dedicated bitcoiners have stopped trying to hunt down Nakamoto. "We really don't care," he says. It's not the individuals behind the code who matter, but the code itself. And while people have stolen and cheated and abandoned the bitcoiners, the code has remained true.
Benjamin Wallace (benwallace@me.com) wrote about scareware in issue 19.10.
" |
193 | 2,022 | "At TED, Elon Musk Revealed Why He Has to Own Twitter | WIRED" | "https://www.wired.com/story/elon-musk-ted-twitter-takeover" | "Elon Musk's Truth
By Gideon Lichfield. Photograph: Trevor Cokley/Alamy
To enter the conference hall at TED, the world's premier jamboree of futuristic optimism, is to enter what feels like a bubble floating in space: a tiered theater built inside a giant, darkened ballroom and suffused with reds and blues and pinks and purples, at the center of which a succession of extraordinary people tell extraordinary stories of extraordinary accomplishments against extraordinary odds.
Even amid this procession of heroes—and many are genuine heroes—Elon Musk commands a special kind of worship among TED acolytes. It is not hard to see why. The CEO of Tesla and SpaceX embodies TED’s fascination with big dreams and impossible odds. And his interview at the conference’s final session on April 14 surely reinforced that devotion. After quizzing Musk on his proposal to buy Twitter and the prospects for fighting climate change, TED curator Chris Anderson cued up a video clip of Musk on Saturday Night Live making fun of himself for his subdued, flat affect as “the first person with Asperger’s” to host the show, and asked him what it had been like to grow up with the syndrome.
Carefully staged as the moment seemed, it was effective. The world’s richest man described a lonely childhood in which he struggled to grasp social cues and implicit meanings. “Others could intuitively understand what is meant by something,” he said. “I just took things very literally, that the words spoken were exactly what they meant.” Taking refuge from the confusing duplicity of human beings, he became “absolutely obsessed with truth,” and pursued the study of physics, computer science, and information theory “in an attempt to understand the truths of the universe.” In another part of the interview, Musk said, “The truth matters to me a lot … almost pathologically matters to me.” Floating there inside the TED bubble, I felt a sudden empathy. I don’t have Asperger’s, but I too was a lonely child who didn’t understand other people and sought truth in science and computing instead. No doubt many of the overachieving geeks who attend TED could identify too.
But there are truths of the universe, and then there are truths of Musk. As he is wont to do, he rewrote history during the interview, asserting that an infamous 2018 tweet in which he claimed to have “funding secured” to take Tesla private, and from which he then had to back down after an SEC investigation, was nonetheless true; he had been “forced” to retract it and settle with the SEC to keep money flowing from Tesla’s banks and ward off short sellers. He talked about three years spent “sleeping on the floor” in the Tesla factory to show his solidarity with employees, glossing over the frequent stories of his verbal abuse and allegations of racism at the factory in California.
And his vision for Twitter, as he haltingly outlined on the stage, is of a platform on which not truth, but freedom is the most important value. Speech should be “as free as possible,” he said, and aside from speech that might be illegal, like direct incitements to violence, he was hazy on where the boundary should lie. To Musk, this freedom is nothing less than an existential need. “Having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization,” he said, to scattered applause.
A Twitter on which anything legal can be said might be "broadly inclusive," but "maximally trusted"? More likely, it would just become an even greater cesspool of misinformation and abuse than it is already. In Musk's idealized version of the platform, his power to spread his version of the truth is uncurtailed—amplified by his enormous band of Twitter followers. Spam and bot accounts would, he said, be banished (he didn't explain how), but presumably not the armies of real-life Musk fans who leap to attack and sometimes threaten his critics.
Musk’s interest in Twitter may ultimately have less to do with his attitude to truth than his martyr complex, which cropped up repeatedly in the interview. His electric car business will help save the planet: If humanity builds enough renewable energy, stationary batteries, and EVs, he said, “we have a sustainable energy future.” His bid to colonize Mars, he has often said, is essential to save humanity from potential extinction. Even his story about working himself nearly to death at the Tesla factory portrayed him as making the ultimate sacrifice: “It was three years of hell … three years of the most excruciating pain in my life … It had to be done, or Tesla would be dead.” This, then, is Musk’s truth: that everything he does, even buying Twitter, is uniquely vital to humanity. And within the TED bubble, there is no greater aspiration.
" |
194 | 2,020 | "How North Korean Hackers Rob Banks Around the World | WIRED" | "https://www.wired.com/story/how-north-korea-robs-banks-around-world" | "By Ben Buchanan. Illustration: WIRED Staff; Getty Images
The bills are called supernotes. Their composition is three-quarters cotton and one-quarter linen paper, a challenging combination to produce. Tucked within each note are the requisite red and blue security fibers. The security stripe is exactly where it should be and, upon close inspection, so is the watermark. Ben Franklin's apprehensive look is perfect, and betrays no indication that the currency, supposedly worth $100, is fake.
Most systems designed to catch forgeries fail to detect the supernotes. The massive counterfeiting effort that produced these bills appears to have lasted decades. Many observers tie the fake bills to North Korea, and some even hold former leader Kim Jong-Il personally responsible, citing a supposed order he gave in the 1970s, early in his rise to power. Fake hundreds, he reasoned, would simultaneously give the regime much-needed hard currency and undermine the integrity of the US economy. The self-serving fraud was also an attempt at destabilization.
At its peak, the counterfeiting effort apparently yielded at least $15 million per year for the North Korean government, according to the Congressional Research Service.
The bills ended up all over the world, allegedly distributed by an aging Irish man and laundered through a small bank in Macau. The North Koreans are believed to have supplemented the forging program with other illicit efforts. These ranged from trafficking opiates and methamphetamines to selling knockoff Viagra and even smuggling parts of endangered animals in secure diplomatic pouches.
All told, the Congressional Research Service estimates that the regime at one point netted more than $500 million per year from its criminal activities.
Excerpted from The Hacker and the State, by Ben Buchanan. Courtesy of Harvard University Press.
During the first decade of the 2000s, the US made great progress in thwarting North Korea's illicit behavior, especially its counterfeiting operation. A law enforcement campaign stretching to 130 countries infiltrated the secret trafficking circles and turned up millions of dollars in bogus bills. In one dramatic scene, authorities staged a wedding off the coast of Atlantic City, New Jersey, to lure suspects and arrest them when they showed up. The US Treasury Department also deployed its expanded Patriot Act powers, levying financial sanctions on the suspect bank in Macau and freezing $25 million in assets.
The wide-reaching American operation seemed to work. By 2008, the prevalence of supernotes had declined dramatically. One FBI agent involved in the US effort offered an explanation to Vice: "If the supernotes have stopped showing up, I'd venture to say that North Korea quit counterfeiting them. Perhaps they've found something else that's easier to counterfeit after they lost the distribution network for the supernote." Under pressure from American investigators, and challenged by a 2013 redesign of the $100 bill, the North Koreans moved on to newer tricks for illicitly filling their coffers.
It should be no surprise that hacking would be one of these. As The New York Times has reported, North Korean leadership has taken care to identify promising young people and get them computer science training in China or even—undercover as diplomats to the United Nations—in the States. Once trained, the North Koreans often live abroad, frequently in China, as they carry out their cyber operations. This gives them better internet connectivity and more plausible deniability of North Korean government ties, while still keeping them out of the reach of US law enforcement.
These North Korean hackers have carried out a systematic effort to target financial institutions all over the world. Their methods are bold, though not always successful. In their most profitable operations, they have manipulated how major financial institutions connect to the international banking system. By duping components of this system into thinking their hackers are legitimate users, they have enabled the transfer of tens of millions of dollars into accounts they control. They have tampered with log files and bank transaction records, prompting a flurry of security alerts and upgrades in international financial institutions. Most publicly, and perhaps by accident, the hackers have disrupted hundreds of thousands of computers around the world in a ham-fisted effort to hold valuable data for ransom. Through their successes and failures, they learned to modify and combine their tricks, evolving their operations to be more effective.
Even with a mixed track record, these attempts at manipulating the global financial system have literally paid off. The bounties from North Korean hacking campaigns are huge; the United Nations estimated the total haul at $2 billion, a large sum for a country with a gross domestic product of only about $28 billion. As North Korea continues to develop nuclear weapons and intercontinental ballistic missiles, cyberoperations help fund the regime. The scale of these operations is tremendous, at least relative to their past illicit efforts. Hackers now turn a far larger profit than the supernotes ever could.
But, as with the supernotes, the potential value of financial manipulation for North Korea goes at least somewhat beyond profit-seeking. If successful, it would also undermine the integrity of worldwide markets by deleting transaction records and distorting financial truth. Such tactics are tempting for government agencies but carry enormous risk. In the run-up to the Iraq War, The New York Times reported that the US considered draining Saddam Hussein's bank accounts, but decided against it, fearful of crossing a Rubicon of state-sponsored cyber fraud that would harm the American economy and global stability. In 2014, President Barack Obama's NSA review commission argued that the US should pledge never to hack and manipulate financial records. To do so, it said, would have a tremendously negative impact on trust in the global economic system.
Bank robbery is a terrible idea. Not only is it illegal, but it also yields an awful return on investment. In the US, the average bank robbery nets around $4,000 in cash, and the average bank robber pulls off only three heists before getting caught. Prospects are a little better overseas, but not much. Strikingly bold capers, like the 2005 theft at Banco Central in Brazil that required months of secretive tunnel-digging, can fetch tens of millions of dollars, but the vast majority of significant attempts end in catastrophic failure.
North Korean operatives found a better way to rob banks. They did not have to break through reinforced concrete or tunnel under vaults to get at the money, and they had no need to use force or threats. Instead, they simply duped the bank's computers into giving it away. To do this, they set their sights on a core system in international business called the Society for Worldwide Interbank Financial Telecommunication, or SWIFT. The SWIFT system has been around since the 1970s. Its 11,000 financial institutions in more than 200 countries process tens of millions of transactions per day. The daily transfers total trillions of dollars, more than the annual gross domestic product of most countries. Many financial institutions in the SWIFT system have special user accounts for custom SWIFT software to communicate their business to other banks all over the world. Analyses from the cybersecurity firms BAE Systems and Kaspersky, as well as reporting in Wired, provide evidence for how the North Koreans targeted these accounts.
The Central Bank of Bangladesh stores some of its money in the Federal Reserve Bank of New York, which the Central Bank uses for settling international transactions. On February 4, 2016, the Bangladeshi bank initiated about three dozen payments. Per the transfer requests sent over the SWIFT system, the bank wanted some of its New York money, totaling almost $1 billion, moved to a series of other accounts in Sri Lanka and the Philippines.
Around the same time and halfway across the world, a printer inside the Central Bank of Bangladesh stopped working. The printer was an ordinary HP LaserJet 400, located in a windowless, 12- by 8-foot room. The device had one very important job: Day and night, it automatically printed physical records of the bank’s SWIFT transactions. When employees arrived on the morning of February 5, they found nothing in the printer’s output tray.
They tried to print manually, but found they could not; the computer terminal connected to the SWIFT network generated an error message saying it was missing a file. The employees were now blind to transactions taking place at their own bank. The silent printer was the dog that did not bark—a sign that something was deeply wrong, but not immediately recognized as such.
This was not an ordinary machine failure. Instead, it was the culmination of shrewd North Korean preparation and aggressiveness. The hackers’ clever move was to target not the SWIFT system itself, but the machine through which the Bangladeshis connected to it. The special accounts used by the Central Bank of Bangladesh to interact with the system had enormous power, including the capacity to create, approve, and submit new transactions. By focusing their espionage on the bank’s network and users, the hackers were eventually able to gain access to these accounts.
It took time to figure out how the Bangladeshis connected to the SWIFT system and to get access to their credentials. Yet even as the hackers were moving through the bank's network and preparing their operation—a process that took months—the Central Bank of Bangladesh failed to detect them. In part, this was because the bank was not looking very hard. After the hack, according to Reuters, a police investigation identified several shoddy security practices, including cheap equipment and a lack of security software, which made it easier for hackers to reach sensitive computers.
Once the hackers gained access to the bank's SWIFT accounts, they could initiate transactions just like any authorized user. To further avoid detection, they wrote special malicious code to bypass the internal antifraud checks in SWIFT software. Worse still, they manipulated transaction logs, making it harder to figure out where the bank's money was going and casting doubt on the veracity of the logs upon which this, and every, high-volume financial institution depends. The North Korean strike against these logs was a dagger to the heart of the system. They sidelined the printer with additional malicious code, buying themselves time while the system processed their illicit transfer requests.
The hackers thus sent their payment requests to New York unbeknownst to anyone in Bangladesh. But employees at the New York Fed realized something was amiss. When they noticed the sudden batch of Bangladeshi transactions, they thought it was unusual that many of the receiving accounts were private entities, not other banks. They questioned dozens of the transfers and sent requests for clarification back.
It was not until the Bangladeshis managed to get their computer systems working again that they realized the severity of the situation. The newly repaired printer spit out the backlog of transaction records, including many that immediately looked suspicious. By the time the central bankers urgently reached out to their counterparts in New York, it was too late. The weekend had come, and the American workers had gone home; the North Korean hackers had either gotten very lucky with the timing of their operation or had planned it remarkably well. The Bangladeshi bankers had to sweat out the days until the Fed staff came back to work.
Monday brought mixed news. On the positive side, vigilant New York Fed analysts had stopped most of the transactions, totaling more than $850 million. This included one $20 million transfer request with an especially odd intended recipient: the “Shalika Fandation” in Sri Lanka. It appears the hackers intended to write “Shalika Foundation,” though no nonprofit by that name, even properly spelled, seems to exist. To the extent that this typo helped alert analysts to the fraud, it must count as one of the most expensive in history, at least for the hackers.
The bad news was that four transactions had gone through. The transactions sent a total of $81 million to accounts at Rizal Bank in the Philippines. The Bangladeshis were less fortunate with Rizal Bank, which had already placed the money in several accounts tied to casinos. Someone, acting as a so-called money mule, had made withdrawals from these accounts on February 5 and February 9—the latter even after the Bangladeshis had warned Rizal Bank of the fraud. (The bank did not respond to requests for comment.) Of the $81 million sent to the Rizal accounts, according to a lawsuit, only $68,356 remained. The rest was gone.
Investigators from the British firm BAE Systems began tracking the bank hackers and uncovered several important clues that identified the North Koreans as perpetrators. They linked some of the code used in the Bangladesh intrusion to earlier North Korean hacks, most notably the 2014 operation against Sony. The investigation reached a clear verdict: From a world away, and from the comfort of their homes and offices, North Korea’s hackers had manipulated transaction records, exploited the system of interbank trust, and pulled off one of the biggest bank heists in history.
As remarkable as the Bangladesh operation was, it was just one part of what was eventually recognized as a worldwide campaign. A parallel target of that campaign was a Southeast Asian bank that has not been named in public. In this second operation, the hackers followed a series of fairly well-orchestrated steps. They appear to have initially compromised their target via the server that hosted the bank’s public-facing website.
In December 2015, they expanded their malicious presence from that server to a different server within the bank. This one ran the powerful SWIFT software that connected the bank to the global financial system. The next month, the hackers deployed additional tools to begin moving within the target network and positioning malicious code to interact with the SWIFT system. On January 29, 2016, the hackers tested some of these tools. They did so at almost precisely the same time as they performed similar activity in their Bangladesh operation.
On February 4, just as the hackers began initiating payment requests in Bangladesh, they also manipulated the Southeast Asian bank’s SWIFT software. However, unlike in the parallel Bangladesh campaign, they did not yet initiate any fraudulent transactions. Slightly more than three weeks after that, the hackers caused a halt in operations at the second bank. Little is known about the circumstances surrounding this disruption.
Even after they took the money from the Central Bank of Bangladesh, the hackers kept up their focus on their second target. In April, they deployed keylogging software to the bank’s SWIFT server, presumably to gain additional credentials to the most powerful user accounts. These credentials, the keys to the bank’s SWIFT kingdom, would be essential to stealing money.
But by now the world of international banking sensed danger, in part aided by BAE’s investigation. SWIFT released new security updates in May in response to the alarm surrounding the Bangladesh incident and worries about the integrity of the financial system. The hackers would have to circumvent these updates to carry out their mission. By July, they began testing new malicious code for that purpose. In August, they once again began deploying code against the bank’s SWIFT server, presumably with the goal of soon transferring funds.
It was here that, despite all their careful testing and deployment of malicious code, the North Koreans hit a fatal snag: The Southeast Asian bank was better prepared and better defended than the Bangladeshi one had been. In August 2016, more than seven months after the hackers had made their initial entry, the bank found the breach. They hired Kaspersky, the high-profile Russian cybersecurity company, to investigate. The hackers, realizing that investigators were in hot pursuit and acting quickly to shut down the operation against the bank, deleted a large number of files to cover their tracks, but missed some. This mistake allowed Kaspersky to discover that much of the malicious code overlapped with that used in the bank hacking incident in Bangladesh.
BAE Systems’ and Kaspersky’s investigations brought the contours of North Korea’s campaign into view. It had ambitions much larger than just the two banks. Notably, in January 2017, the North Koreans compromised a Polish financial regulator’s systems and caused it to serve malicious code to any visitors to its websites, many of which were financial institutions. The North Koreans preconfigured that malicious code to act against more than 100 institutions from all over the world, primarily banks and telecommunications companies. The list of targets included the World Bank, central banks from countries such as Brazil, Chile, and Mexico, and many other prominent financial firms.
Nor did the North Koreans limit themselves to seeking out traditional currencies. Their campaign included a series of efforts to steal increasingly valuable cryptocurrencies like bitcoin from unsuspecting users all over the world. They also targeted a significant number of bitcoin exchanges, including a major one in South Korea known as Youbit. In that case, the exchange lost 17 percent of its financial assets to North Korean hackers, though it refused to specify how much that amounted to in absolute terms.
One estimate from Group-IB, a cybersecurity company, pegged North Korea’s profit from some of their little-noticed operations against cryptocurrency exchanges at more than $500 million. While it is impossible to confirm this estimate or the details of the hacks on cryptocurrency exchanges, the size of the reported loss emphasizes the degree to which the North Koreans have plundered smaller and more private financial institutions, almost entirely out of view.
The cybersecurity companies reached a consensus: The North Koreans had clearly reoriented some of their hacking tools and infrastructure from destructive capabilities to financially lucrative and destabilizing ones. The same country that had launched denial-of-service attacks against the US in 2009, wiped computers across major South Korean firms in 2013, and hit Sony in 2014 was now in the business of hacking financial institutions. The most isolated and sanctioned regime on the planet, as it continued to pour money into acquiring illicit nuclear weapons, was funding itself in part through hacking. It was yet another way in which statecraft and cyberoperations had intersected. Far more was to come.
The North Korean hackers had clearly mastered several key hacking tasks that once would have been far beyond them. They could get deep access to banks’ computer networks in countries all over the world by deploying malicious code, conducting extensive reconnaissance, and remaining largely undetected. They had also developed an exceptional understanding of the SWIFT system and how banks connected to it, updating their tactics and tools to keep pace with the urgent security upgrades SWIFT and financial institutions kept rolling out.
But they had a problem: In too many cases, they issued a fraudulent transaction without being able to actually get the pilfered funds. Banks had sometimes thwarted the theft operations in their final withdrawal stages. The North Koreans needed a better way to cash out.
In the summer of 2018, the hackers tried a new tactic. The operation began with the compromise of Cosmos Cooperative Bank in India sometime around June. Once inside Cosmos, they developed a thorough understanding of how the bank functioned and gained secret access to significant parts of its computing infrastructure. Throughout the summer of 2018, they seemed to be preparing for a new kind of operation. This time, they would use ATM cards as well as electronic funds transfers to get the money out.
The premise of an ATM cash-out is quite straightforward and predates the North Koreans’ operations: Hackers gain access to the credentials of a bank’s customer, and then a money mule shows up to an ATM and withdraws money from that account. With no bank teller to talk to or physical branch to enter, the chance of arrest is substantially lower. Previous ATM cash-outs by different criminal hackers had worked at a small scale, including against the National Bank of Blacksburg in Virginia. The challenge was getting the target’s card and PIN to dupe the ATM into disbursing the money.
Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg But before the North Koreans could act, US intelligence agencies caught a whiff that something was amiss. While it seems the US government did not know specifically which financial institution the North Koreans had compromised, the FBI issued a private message to banks on August 10. In it, the bureau warned of an imminent ATM cash-out scheme due to a breach at small- to medium-size banks. The breach fit into a pattern of what investigators often called “unlimited operations” because of the potential for many withdrawals. The FBI urged banks to be vigilant and to upgrade their security practices.
It did not matter. On August 11, the North Koreans made their move. In a window that lasted only a little over two hours, money mules in 28 countries sprang into action. Operating with cloned ATM cards that worked just like real ones, they withdrew money from machines all over the world in amounts ranging from $100 to $2,500. Whereas previous North Korean attempts had failed because large bank transfers were hard to miss and easy to reverse, this effort was designed to be broad, flexible, and fast. The total take was around $11 million.
One question immediately surfaced: How did the North Koreans manage this? For each withdrawal, they would have had to trick Cosmos Bank’s authentication system into permitting the disbursal of money at the ATM. Even if they had some information for each customer’s account, it is exceptionally unlikely that they had managed to get the PINs of so many individuals. Without those numbers, every attempt at authenticating the withdrawal requests should have failed.
Saher Naumaan and other researchers at BAE Systems offered a theory that fits the available evidence quite well. They surmised that the North Korean compromise of the Cosmos computer infrastructure might have been so thorough that the hackers were able to manipulate the fraudulent authentication requests themselves. As a result, when each withdrawal request made its way through the international banking system to Cosmos Bank, it was likely misdirected to a separate authentication system set up by the hackers. This system would approve the request and bypass any fraud-detection mechanisms Cosmos had in place. A senior police official in India later confirmed this supposition to the Times of India.
Once the cash-out was successful, the hackers also went back to Plan A: Two days later, they initiated three more transfers using the SWIFT system from Cosmos Bank to an obscure company in Hong Kong, netting around another $2 million. The firm, ALM Trading Limited, had been created and registered with the government just a few months before. Its nondescript name and apparent lack of web presence make it exceptionally difficult to learn more about it or about the fate of the money transferred to it, though it seems likely that the North Koreans collected the cash.
The Cosmos operation raised questions about authentication and trust in financial transactions, and it shows how the North Koreans’ tactics of theft, ransom, and financial-record manipulation can have impacts that go beyond just the acquisition of funds for the regime. Future operations may try to exploit this potential for destabilization more directly, perhaps by flooding the SWIFT system with fraudulent transactions to cause still-greater doubts about its integrity.
There is no reason to think that the North Korean financial campaign will stop. For years, its operational hallmark has been code that continually evolves and improves. What the North Koreans lack in skill, at least when compared with their counterparts at the NSA, they partially make up for in aggressiveness and ambition. They seem mostly uninhibited by worries of blowback and appear to welcome the consequences of disrupting thousands of computers or modifying vitally important financial records. In gaining much-needed cash, they slowly reshape and advance their position geopolitically. They incur setbacks, to be sure, but over time their hackers have garnered vast sums for the regime while threatening the perceived integrity of global financial systems. The days of supernotes are gone, but North Korea has brought together fraud and destabilization once again.
Excerpted from THE HACKER AND THE STATE: CYBER ATTACKS AND THE NEW NORMAL OF GEOPOLITICS by Ben Buchanan, published by Harvard University Press
" |
195 | 2,023 | "Why Scientists Are Bugging the Rainforest | WIRED" | "https://www.wired.com/story/why-scientists-are-bugging-the-rainforest" | "Matt Simon Science Why Scientists Are Bugging the Rainforest Just by spying on sounds, researchers can detect the vocalizing creatures of a rainforest, like this purple-chested hummingbird.
Photograph: Martin Schaefer There’s much, much more to the rainforest than meets the eye. Even a highly trained observer can struggle to pick out individual animals in the tangle of plant life—animals that are often specifically adapted to hide from their enemies. Listen to the music of the forest, though, and you can get a decent idea of the species by their chirps, croaks, and grunts.
This is why scientists are increasingly bugging rainforests with microphones—a burgeoning field known as bioacoustics—and using AI to automatically parse sounds to identify species. Writing today in the journal Nature Communications, researchers describe a proof-of-concept project in the lowland Chocó region of Ecuador that shows the potential power of bioacoustics in conserving forests.
“Biodiversity monitoring has always been an expensive and difficult endeavor,” says entomologist and ecologist David Donoso of Ecuador’s National Polytechnic School, a coauthor of the paper. “The problem only worsens when you consider that good monitoring programs require many years of data to fully understand the dynamics of the system, and how specific problems affect these dynamics.” The researchers picked over 40 sites across different landscape types, including active agricultural lands, plantations that had been abandoned for decades (and are recovering ecologically), and intact, old-growth forest. Below, you can see the instruments they deployed. At left is a microphone that recorded sound for two minutes every 15 minutes, so it didn’t drain its battery as quickly as recording 24/7. At right is a light trap for catching insects.
Sound recorder and automatic light trap for recording voices and night insects.
Photograph: Annika Busse Once the team had these recordings, they tapped experts to identify birds and amphibians by their vocalizations, and used DNA from the light traps to identify nocturnal insects. They also used AI to identify the bird species by sound.
“We can say the scientific part is basically solved, so the AI models work,” says conservation ecologist Jörg Müller of the University of Würzburg in Germany, lead author of the paper. “It’s fine-scale, high-quality. And the nice thing is that you can store the data.” Several years of recordings will track how the forest ecosystem evolves over time, with species populations waxing or waning as new arrivals colonize the terrain, or as climate change affects which struggle or thrive in hotter, drier conditions.
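To make the idea concrete: the paper’s actual pipeline isn’t reproduced here, but bioacoustic classifiers typically turn each clip into a mel spectrogram and then score it per species with a trained model. Below is a minimal sketch of that preprocessing step in Python, assuming the widely used librosa audio library; `identify_species` and the filename are hypothetical stand-ins, not the study’s code.

```python
# Minimal bioacoustics preprocessing sketch (illustrative, not the study's code).
import numpy as np
import librosa

def audio_to_mel(path, sr=22050, n_mels=128):
    """Load a field recording and return a log-scaled mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

def identify_species(mel_db):
    """Hypothetical classifier stub: a real system would run a trained
    neural network here and return (species, confidence) pairs."""
    raise NotImplementedError("plug in a trained model, e.g. a CNN")

# Usage: one two-minute clip, recorded every 15 minutes as in the study.
# spec = audio_to_mel("site07_0615.wav")   # hypothetical filename
# detections = identify_species(spec)
```

The duty-cycled schedule the researchers describe, two minutes out of every 15, means each recorder captures only about 13 percent of the day, which is why the batteries last so much longer than with continuous recording.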
In particular, scientists and conservationists are interested in learning about the composition of species that return to disturbed environments. In Ecuador, the agricultural land tends to attract birds from southern parts of South America with their natural open areas, which are similar to the Pampas grasslands. “So it could be that you have the same number of species in agriculture and all those forests, but totally different ones,” says Müller. “These habitats are not empty—they are full of birds—but not the original fauna from primeval forests.” This map shows the many sampling locations in Ecuador.
Illustration: Constance Tremlett Researchers are also trying to track animals that are responding to a complex set of overlapping environmental stressors. Forest health used to primarily be a problem of deforestation. Now it is a far more complicated set of problems stemming from global climate change and land use. The Amazon, for instance, is threatened by both loggers and severe droughts.
One of the challenges of field observation is that it requires humans, who are very big mammals, to go traipsing through the forest, altering its normal bustle. But a microphone simply listens, a camera trap quietly watches for movement and snaps a picture, and a light trap silently attracts insects.
The study’s recordings picked up the purple-chested hummingbird, shown at top, and the extremely rare banded ground cuckoo, shown below. “This is the holy grail for ornithologists. Some ornithologists go to Ecuador for 30 years to see the bird and never see them,” says Müller. “And we report it with sound recorders and with camera traps. So it shows another advantage from these recorders: They do not disturb.” The banded ground cuckoo (Neomorphus radiolosus, left) is among the birds recorded in tropical reforestation plots in Ecuador.
Photograph: John Rogers Bioacoustics can’t fully replace ecology fieldwork, but can provide reams of data that would be extremely expensive to collect by merely sending scientists to remote areas for long stretches of time. With bioacoustic instruments, researchers must return to collect the data and swap batteries, but otherwise the technology can work uninterrupted for years. “Scaling sampling from 10, 100, [or] 1,000 sound recorders is much easier than training 10, 100, 1,000 people to go to a forest at the same time,” says Donoso.
“The need for this kind of rigorous assessment is enormous. It will never be cost-effective to have a kind of boots-on-the-ground approach,” agrees Eddie Game, the Nature Conservancy’s lead scientist and director of conservation for the Asia Pacific region, who wasn’t involved in the new research. “Even in relatively well-studied places it would be difficult, but certainly, in a tropical forest environment where that diversity of species is so extraordinary, it’s really difficult.” A limitation, of course, is that while birds, insects, and frogs make a whole lot of noise, many species do not vocalize. A microphone would struggle to pick up the presence of a butterfly or a snake.
But no one’s suggesting that bioacoustics alone can quantify the biodiversity of a forest. As with the current experiment, bioacoustics work will be combined with the use of cameras, field researchers, and DNA collection. While this team harvested DNA directly from insects caught in light traps, others may collect environmental DNA, or eDNA, that animals leave behind in soil, air, and water.
In June, for instance, a separate team showed how they used the filters at air quality stations to identify DNA that had been carried by the wind. In the future, ecologists might be able to sample forest soils to get an idea of what animals moved through the area. But while bioacoustics can continuously monitor for species, and eDNA can record clues about which ones crossed certain turf, only an ecologist can observe how those species might be interacting—who’s hunting who, for instance, or what kind of bird might be outcompeting another.
The bioacoustics data from the new study suggests that Ecuador’s forests can recover beautifully after small-scale pastures and cacao plantations are abandoned. For instance, the researchers found the banded ground cuckoo already in 30-year-old recovery forests. “Even our professional collaborators were surprised at how well the recovery forests were colonized by so-called old-growth species,” says Müller. “In comparison to Europe, they do it very quickly. So after, let's say, 40, 50 years, it's not fully an old-growth forest. But most of these very rare species can make use of this as a habitat, and thereby expand their population.” This technology will also be helpful for monitoring forest recovery—to confirm, for example, that governments are actually restoring the areas they say they are. Satellite images can show that new trees have been planted, but they’re not proof of a healthy ecosystem or of biodiversity. “I think any ecologist would tell you that trees don't make a forest ecosystem,” says Game. The cacophony of birds and insects and frogs—a thriving, complex mix of rainforest species—do.
“I think we're just going to keep on learning so much more about what sound can tell us about the environment,” says Game, who compares bioacoustics to NASA’s Landsat program , which opened up satellite imagery to the scientific community and led to key research on climate change and wildfire damage. “It was radically transformational in the way we looked at the Earth. Sound has some similar potential to that,” he says.
" |
196 | 2,023 | "Why Rain Is Getting Fiercer on a Warming Planet | WIRED" | "https://www.wired.com/story/why-rain-is-getting-fiercer-on-a-warming-planet" | "Matt Simon Science Why Rain Is Getting Fiercer on a Warming Planet Photograph: ANTHONY WALLACE/Getty Images One of the weirder side effects of climate change is what it’s doing to rainfall. While most people think about global warming in terms of extreme heat—the deadliest kind of natural disaster in the United States—there is also an increasing risk of extreme precipitation. On average, it will rain more on Earth, and individual storms will get more intense.
Intuitively, it doesn’t make much sense. But the physics is clear—and highly consequential, given how destructive and deadly floods already were before climate change.
Think of rain like Earth’s sweat. When your body sweats and the moisture evaporates off your skin, it carries heat away with it. Likewise, water evaporating off land and oceans carries heat away from those surfaces. (This cooling does about half the total job of dispersing heat from the planet’s surface, keeping it in balance with incoming sunlight.) After moisture rises, it condenses and falls as rain.
Greenhouse gases in the atmosphere are like a blanket that’s making it harder for Earth to shed heat into space. The more greenhouse gases it contains, the “thicker” this blanket becomes. In response, Earth uses more evaporative cooling—just as you’d sweat more under a down comforter than a cotton sheet.
“It's a basic energy balance issue,” says Liz Moyer, an atmospheric scientist at the University of Chicago who studies the influence of climate change on precipitation. “The very physics that gives us the greenhouse effect also makes the planet shed more of this energy by evaporation. And because whatever goes up must come down, that means we also get more rain.” Atmospheric scientists rely on the Clausius–Clapeyron equation, which says that for every 1 degree Celsius of warming, air can hold 6 to 7 percent more water. If nothing else changes, you'd expect the same increase in the amount of rainfall from a given storm.
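In symbols (a standard textbook form, not something spelled out in the article), the Clausius–Clapeyron relation ties the fractional change in the air’s saturation vapor pressure $e_s$ to temperature $T$, with $L_v$ the latent heat of vaporization and $R_v$ the gas constant for water vapor:

```latex
\frac{1}{e_s}\frac{de_s}{dT} = \frac{L_v}{R_v T^2}
\approx \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}
{\left(461\ \mathrm{J\,kg^{-1}\,K^{-1}}\right)\left(288\ \mathrm{K}\right)^{2}}
\approx 0.065\ \mathrm{K^{-1}}
```

Plugging in typical near-surface values gives roughly 6.5 percent more water vapor per degree of warming, which is where the 6 to 7 percent figure comes from; the fraction is slightly higher in colder air and lower in warmer air, hence the range.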
However, Moyer cautions, “the fact that a warmer atmosphere holds more moisture doesn't tell you how the average rainfall will increase. That change is set by different physics. You could even imagine an atmosphere that holds more moisture but has no increase in average rainfall. In that case you'd have more intense storms, but it would rain less often.” In other words, more moisture might just result in more humidity without rain.
It’s historically been a challenge for scientists to disentangle the natural variability of rains and the influence of climate change, says climate scientist Yoo-Geun Ham, of Chonnam National University in South Korea (a country that’s been grappling with flooding). Rainfall is by its nature a highly complex and variable phenomenon: One year might naturally be wetter or drier than the next, independent of climate change. “Precipitation has very high natural variability compared to other meteorological variables,” says Ham. “Precipitation itself is a very challenging variable to detect global warming signals.” So in a recent study, Ham and his colleagues used a deep learning model to parse precipitation data, teasing out the signal of climate change in recent decades. “We are having many cases of the heavier rainfall events, in particular this year in East Asia and the Eastern US,” says Ham. “We can conclude that that kind of increased occurrence of heavy rainfall events is due to global warming.” The West Coast of the US, too, is going to get soaked. Here, the “atmospheric river” storms that tear through are feeding on moisture as they move across the Pacific. “When you heat the ocean surface by a degree or something like that, you actually increase the amount of water that is coming into California through these atmospheric rivers,” says Rao Kotamarthi, senior scientist at Argonne National Laboratory who studies precipitation and climate change. “You will feel the impact of that by additional intense rains in California.” Extreme rain gets especially dangerous when water dumps quickly. The landscape simply doesn’t have time to absorb the deluge, leading to flash flooding. If one storm follows another, the soil might already be too wet to accept any more water.
This sort of hazard is increasingly perilous in areas where snow is common, like high elevations. Earlier this year, one study found extreme precipitation is increasing by 15 percent for every 1 degree C of warming in mountainous regions and high latitudes. That’s more than double what the Clausius–Clapeyron equation suggests.
“When we talk about extreme precipitation—and we look at the impact it has in terms of severe flooding and damage to infrastructure—it really matters whether precipitation is falling as rain or snow,” says Mohammed Ombadi, a climate scientist at the University of Michigan and lead author of the paper. “What we see is that global warming is not only increasing precipitation due to having more water vapor in the atmosphere, but a higher proportion of this extreme precipitation is falling as rain instead of snow.” Hazards multiply when there’s more rain and less snow. Snow accumulates slowly and can take months to fully melt. Downpours release all that water at once. In mountainous regions, rain can trigger landslides, too, like the ones that ravaged the Himalayas in August. “Based on some preliminary data that people collected,” says Ombadi, “it seems like having a higher proportion of precipitation falling as rain instead of snow was really a key factor leading to what happened last month.” Current infrastructure simply isn’t built for these ever-bigger deluges, and that will put lives at risk. Generally speaking, urban planners have designed city drainage systems to whisk away rainwater as quickly as possible to avoid flooding. But as rainfall gets heavier, canals and sewers can’t get the water out fast enough.
So the focus is shifting to making cities “spongier,” with fewer impermeable surfaces where water can accumulate, like concrete, and more green spaces so water can seep into underlying aquifers for later use. “We definitely need to change the way we design new infrastructure to be consistent with the change that global warming is bringing,” says Ombadi, “and what will happen 10 years, 20 years, and 30 years from now.”
" |
197 | 2,020 | "Why Massive Saharan Dust Plumes Are Blowing Into the US | WIRED" | "https://www.wired.com/story/saharan-dust-plumes-are-blowing-into-the-us" | "Matt Simon Science Why Massive Saharan Dust Plumes Are Blowing Into the US Photograph: RICARDO ARDUENGO/Getty Images The pandemic is still raging, the Arctic is burning up, and microplastics are polluting every corner of the Earth, but do try to take a deep breath. Actually, belay that, especially if you live in the southern United States. A plume of dust thousands of miles long has blown from the Sahara across the Atlantic, suffocating Puerto Rico in a haze before continuing across the Gulf of Mexico. Yesterday, it arrived in Texas and Louisiana.
It’s normal for Saharan dust to blow into the Americas—in fact, the phosphorus it carries is a reliable fertilizer of the Amazon rainforest.
The dust makes the journey year after year, starting around mid-June and tapering off around mid-August. The good news is, the dust plumes can deflate newly forming hurricanes they might encounter on the way over. But the bad news is that dust is a respiratory irritant, and we could use fewer of those during the Covid-19 pandemic. Also, the current plume is particularly dense, and it’s not alone: The African desert is now releasing another that’s working its way across the Atlantic and will arrive in a few days. Still more could be on the way as the summer goes on.
A satellite captures a dust plume leaving Africa on June 19.
Video: CSU/CIRA/NOAA En route to the continental US, the plume struck Puerto Rico on Saturday, cutting visibility down to 3 miles. It’s the worst Saharan dust event the island has seen in 15, maybe 20 years, says Olga Mayol-Bracero, an atmospheric chemist at the University of Puerto Rico. Her air-analyzing instruments were working in real time, detecting the component elements of the desert dust. “We were quite surprised, seeing such high values for all these different parameters—we had never seen that,” Mayol-Bracero says. “So it was quite shocking.” How does Saharan dust make it all the way across an ocean? It’s a lesson in atmospheric science.
Because it’s a desert, the Sahara is loaded with particulate matter, from coarse sand down to the tiniest of dirt specks, none of which is very well anchored to the ground. By contrast, the lush rainforests to the south of the Sahara have trees that both block the wind and hold on to the soil with their roots, keeping all the muck from taking to the air. The conflict between these two atmospheric regions is what births the plumes that blow clear across the Atlantic.
The dust plume arrived in the Caribbean a few days after it left Africa.
Video: CSU/CIRA/NOAA The Sahara is notoriously dry and hot. But down south, around the Gulf of Guinea, it’s much cooler and wetter, on account of its proximity to the equator. “The setup between those two—the hot to the north and the cool, moist to the south—sets up a wind circulation that can become very strong, and it can actually scour the surface of the desert,” says Steven Miller, deputy director of the Cooperative Institute for Research in the Atmosphere at Colorado State University, which is monitoring the plumes. (You can watch the dust’s progress from a satellite with this neat tool. Look for the gray clouds on the map.) At the same time, a mile above the desert a 2-mile-thick mass of hot, dry air called the Saharan Air Layer, or SAL, has formed. This happens reliably every summer, with easterly winds carrying the layer west toward the Americas. The process creates “pulses” of warm, dry, dusty air traveling along the SAL that cycle every three to five days, says Miller. So if you take a look at the GIF below, you can see the first plume that’s reached the southern US, and the new plume currently kicking off from the Sahara. Each plume takes about three days to cross the ocean.
Here you can see one plume moving through the Caribbean, while another leaves Africa.
Video: CSU/CIRA/NOAA Looking at these images, you might notice that the plumes are traveling suspiciously like hurricanes do across the Atlantic—and, indeed, this is where things get extra interesting. The SAL is about 50 percent drier than the surrounding air, and 5 to 10 degrees Celsius hotter, and it’s unloading plume after plume. “When that kicks into high gear, and you've got these pulses after pulses of really strong Saharan air, that's what kind of inhibits the tropical storm formation, which forms in these easterly winds as well,” says Miller. In other words, these dust plumes actually counteract the generation of hurricanes.
That’s also because of the contrast between wetter air and drier air. Tropical storms derive their energy from wet air. “When you get dry air mixing in, it can weaken the storm, and it creates these downdrafts and inhibits the convection that starts to get organized to create hurricanes,” Miller says.
Think of this convection like boiling a pot of water. At the bottom of the pot, the water gets much hotter than the water at the surface, which is in contact with the air. This contrast creates convection—boil some rice and you’ll notice that the grains cycle between the top and the bottom of the pot. “But if you have the opposite situation set up, where you have the warm water above cool water, then it's what we call a stable situation—there's no mixing that happens,” says Miller. Warm air, after all, wants to rise, and cold air wants to sink. “When you have the Saharan Air Layer moving across, it's kind of like that. You've got this warmer air moving across the Atlantic Ocean, which is a cooler ocean surface. You have this cool air underneath warm air, and then the atmosphere in that case is very stable.” It doesn't help matters for any budding hurricanes that the dust in the SAL is absorbing heat from the sun as it travels across the Atlantic, creating still more atmospheric stability. Even worse for hurricanes, they need a calm environment in order to start spinning, but the SAL is barrelling in with 50-mile-per-hour winds. “It tilts and it bends the tropical cyclone vortex as you go up in height, and it decouples and disrupts the storm's internal ‘heat engine,’ as we call it,” says Miller. “What the storm wants is just a nice vertically aligned vortex so it can transfer heat and moisture from the surface upward and out.” Forecast models can predict where the dust might land in the Americas, just like scientists would do with an approaching hurricane.
Miller reckons that the plume currently working through the southern US could eventually make it to him in Colorado, albeit in a diminished form. That’s because of gravity: As the plume makes its way across the Atlantic, the larger particles fall out first, leaving the smaller particles to make landfall.
Air sampling stations throughout the US gather this particulate material for scientists to study. “What we typically see is that the concentrations are highest in the southeast, more towards Florida,” says Jenny Hand, senior research scientist at the Cooperative Institute for Research in the Atmosphere. “And as it moves farther north, the concentrations will go down, just as it sort of settles out, diffuses, and gets moved around. But we do see those impacts up into the Ohio River Valley pretty regularly in our data.” So what does that mean for respiratory health, especially with Covid-19 being a respiratory disease? “Yeah, it's not good,” says Hand. “Especially now.” When you inhale dust, it travels deep into your lungs, triggering an inflammatory immune response. If your lungs are healthy, maybe this will manifest as a mild cough. “But for others who have chronic inflammatory lung conditions, such as asthma or emphysema, this extra burden of inflammation can tip them over into severe breathing trouble,” says W. Graham Carlos of the Indiana University School of Medicine and Eskenazi Health. “We know, for example, that in many parts of the world that are afflicted with sand and dust storm events, such as the Middle East, we see more asthma and asthma attacks.” He advises that people with respiratory conditions stay indoors until the plume passes. If you have to go outside, he says, wear an N95 mask: “That type of mask filters those fine particles, fine enough to travel in the air across the Atlantic Ocean.” Carlos adds that researchers can’t yet say whether inhaling the Saharan dust might predispose people to contracting Covid-19 or make the illness worse. “I would caution, though, that Covid is also an inflammatory condition in the lungs, and that's in fact why people are needing ventilators and hospitals are surging,” he says. “So this could add insult to injury. In other words, you might have a low-grade inflammatory condition from the dust plume, and then if you were to get Covid on top of that, it may be worse.” As the weather cools in Africa starting in mid-August, that temperature differential between the desert and the forests to the south will weaken, zapping the SAL conveyor belt. The dust clouds will stop rolling across the Atlantic. Then we can all go back to just worrying about Covid-19 and microplastics and a melting Arctic.
" |
198 | 2,023 | "How Bad Is the Smoke in the Midwest? Check Out This Map | WIRED" | "https://www.wired.com/story/how-bad-is-the-smoke-in-the-midwest-check-out-this-map" | "Matt Simon Science How Bad Is the Smoke in the Midwest? Check Out This Map Photograph: KAMIL KRZACZYNSKI/Getty Images Right now, Detroit, Chicago, and Minneapolis have the unhealthiest air in the world, save for Dubai. Canadian wildfires are spewing smoke that’s wafting south, blanketing the Midwest in a toxic haze, just as they did earlier this month along the East Coast.
Seventeen states—with nearly a third of the US population—are under air quality alerts.
Video: NOAA The animation above gives you an idea of the scale and severity of what’s unfolding. This is from an experimental model called HRRR-Smoke (High-Resolution Rapid Refresh), produced by the National Oceanic and Atmospheric Administration (NOAA), and it has become a critical tool for meteorologists and atmospheric scientists.
(You can play with the map here.) It’s a forecast for how the smoke might move today, showing how it is swirling across not only the Midwest, but the East Coast once again, and even the South. The model predicts that smoke may continue to waft as far south as Georgia through the end of the day. (This map forecasts hours ahead, not days.) The hotter the color, the higher the concentration of smoke in the air.
Specifically, this animation shows “Near Surface Smoke,” or concentrations about 26 feet off the ground. That’s the stuff Midwesterners have to worry about breathing.
Wildfire smoke is a cocktail of really nasty stuff, including charred particulate matter, such as plants and dirt, that can get deep into lungs, irritating airways. It’s also loaded with toxic chemicals, like benzene and formaldehyde, and can even develop new nasties as it travels through the atmosphere, like ozone. People with asthma are particularly vulnerable to this toxic gas, which inflames the airways.
Interestingly enough, the HRRR model isn’t based on a direct measurement of the smoke. Instead, it uses infrared satellite data, which pinpoints wildfires and estimates their severity. It then employs weather models, which factor in temperature and wind, to forecast where the resulting smoke is headed.
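To give a feel for what such a forecast does (this is a toy illustration, not NOAA's code; HRRR-Smoke is a full 3D weather model), the core transport idea can be sketched as wind advecting a smoke-concentration field across a grid:

```python
# Toy 1D smoke-transport sketch (illustrative only; not NOAA's HRRR-Smoke).
import numpy as np

def advect_smoke(conc, wind, dx=3000.0, dt=60.0):
    """One upwind advection step for a smoke field on a periodic 1D grid.
    conc: concentration per grid cell; wind: speed in m/s (assumed positive)."""
    courant = wind * dt / dx             # must stay below 1 for stability
    return conc - courant * (conc - np.roll(conc, 1))

# A plume drifting downwind at 10 m/s on a 3 km grid, stepped for one hour.
smoke = np.zeros(100)
smoke[10:15] = 50.0                      # initial plume, arbitrary units
for _ in range(60):                      # 60 one-minute steps
    smoke = advect_smoke(smoke, wind=10.0)
```

The real model does this in three dimensions, with evolving winds, vertical mixing, and the satellite-derived fire-intensity estimates feeding the smoke source term.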
Video: NOAA The animation above shows a different measurement: “Vertically Integrated Smoke.” This models a column of air 15.5 miles high. It is the smoke you can see in the sky, as opposed to the smoke that’s a health hazard at ground level.
While the smoke is a public health emergency for people in the Midwest and on the East Coast, it’s also a scientific opportunity. Researchers can use HRRR to model where smoke is going, then use measurements during an event like this to improve that modeling. “From a scientific point of view, we think we’re seeing the HRRR smoke model doing the right thing,” says Stan Benjamin, senior weather modeling scientist at NOAA Global Systems Laboratory and branch leader for development of HRRR. “We do have people in our lab that are working on actually using the measurements of smoke at the surface, and also through satellite images, to refine the initial conditions for the HRRR model.” The National Weather Service is forecasting that smoky conditions will continue through tomorrow—but the source of all that smoke shows no signs of letting up. Canada is suffering an unprecedented wildfire season, and climate change’s fingerprints are all over it. The hotter the world gets, the easier it is for the atmosphere to suck moisture out of vegetation, turning vast landscapes into tinder.
All it takes is a discarded cigarette butt or a lightning strike—which are growing increasingly common in the north—to ignite a blaze that burns out of control.
All of this is to say: Keep the HRRR map handy. Wildfire smoke isn’t just a problem for western states anymore, but for the whole of North America.
" |
199 | 2,023 | "The Snow Crab Vanishes | WIRED" | "https://www.wired.com/story/the-snow-crab-vanishes" | "Julia O’Malley Science The Snow Crab Vanishes Photograph: Víctor Suárez/Alamy This story originally appeared on Grist and is part of the Climate Desk collaboration.
My small turboprop plane whirred low through thick clouds. Below me, St. Paul Island cut a golden, angular shape in the shadow-dark Bering Sea. I saw a lone island village—a grid of houses, a small harbor, and a road that followed a black ribbon of coast.
Some 330 people, most of them Indigenous, live in the village of St. Paul, about 800 miles west of Anchorage, where the local economy depends almost entirely on the commercial snow crab business. Over the past few years, 10 billion snow crabs have unexpectedly vanished from the Bering Sea. I was traveling there to find out what the villagers might do next.
The arc of St. Paul’s recent story has become a familiar one—so familiar, in fact, that I couldn’t blame you if you missed it. Alaska news is full of climate elegies now—every one linked to wrenching changes caused by burning fossil fuels. I grew up in Alaska, as my parents did before me, and I’ve been writing about the state’s culture for more than 20 years. Some Alaskans’ connections go far deeper than mine. Alaska Native people have inhabited this place for more than 10,000 years.
As I’ve reported in Indigenous communities, people remind me that my sense of history is short and that the natural world moves in cycles. People in Alaska have always had to adapt.
Even so, in the past few years I’ve seen disruptions to economies and food systems, as well as fires, floods, landslides, storms, coastal erosion, and changes to river ice—all escalating at a pace that’s hard to process. Increasingly, my stories veer from science and economics into the fundamental ability of Alaskans to keep living in rural places.
You can’t separate how people understand themselves in Alaska from the landscape and animals. The idea of abandoning long-occupied places echoes deep into identity and history. I’m convinced the questions Alaskans are grappling with—whether to stay in a place and what to hold onto if they can’t—will eventually face everyone.
I’ve given thought to solastalgia—the longing and grief experienced by people whose feeling of home is disrupted by negative changes in the environment. But the concept doesn’t quite capture what it feels like to live here now.
A few years ago, I was a public radio editor on a story out of the small Southeast Alaska town of Haines about a storm that came through carrying a record amount of rain. The morning started routinely—a reporter on the ground calling around, surveying the damage. But then, a hillside rumbled down, taking out a house and killing the people inside. I still think of it—people going through regular routines in a place that feels like home, but that, at any time, might come cratering down. There’s a prickly anxiety humming beneath Alaska life now, like a wildfire that travels for miles in the loamy surface of soft ground before erupting without notice into flames.
But in St. Paul, there was no wildfire—only fat raindrops on my windshield as I loaded into a truck at the airport. In my notebook, tucked into my backpack, I’d written a single question: “What does this place preserve?” The sandy road from the airport in late March led across wide, empty grassland, bleached sepia by the winter season. Town appeared beyond a rise, framed by towers of rusty crab pots. It stretched across a saddle of land, with rows of brightly painted houses—magentas, yellows, teals—stacked on either hillside. The grocery store, school, and clinic sat in between them, with a 100-year-old Russian Orthodox church named for Saints Peter and Paul, patrons of the day in June 1786 when Russian explorer Gavril Pribylov landed on the island. A darkened processing plant, the largest in the world for snow crabs, rose above the quiet harbor.
You’re probably familiar with sweet, briny snow crab— Chionoecetes opilio —which is commonly found on the menus of chain restaurants like Red Lobster. A plate of crimson legs with drawn butter there will cost you $32.99. In a regular year, a good portion of the snow crab America eats comes from the plant, owned by the multibillion-dollar company Trident Seafoods.
Not that long ago, at the peak of crab season in late winter, temporary workers at the plant would double the population of the town, butchering, cooking, freezing, and boxing 100,000 pounds of snow crab per day, along with processing halibut from a small fleet of local fishers. Boats full of crab rode into the harbor at all hours, sometimes motoring through swells so perilous they’ve become the subject of a popular collection of YouTube videos.
People filled the town’s lone tavern in the evenings, and the plant cafeteria, the only restaurant in town, opened to locals. In a normal year, taxes on crab and local investments in crab fishing could bring St. Paul more than $2 million.
Then came the massive, unexpected drop in the crab population—a crash scientists linked to record-warm ocean temperatures and less ice formation, both associated with climate change. In 2021, federal authorities severely limited the allowable catch. In 2022, they closed the fishery for the first time in 50 years. Industry losses in the Bering Sea crab fishery climbed into the hundreds of millions of dollars. St. Paul lost almost 60 percent of its tax revenue overnight. Leaders declared a “cultural, social, and economic emergency.” Town officials had reserves to keep the community’s most basic functions running, but they had to start an online fundraiser to pay for emergency medical services.
Through the windshield of the truck I was riding in, I could see the only cemetery on the hillside, with weathered rows of Orthodox crosses. Van Halen played on the only radio station. I kept thinking about the meaning of a cultural emergency.
Some of Alaska’s Indigenous villages have been occupied for thousands of years, but modern rural life can be hard to sustain because of the high costs of groceries and fuel shipped from outside, limited housing, and scarce jobs. St. Paul’s population was already shrinking ahead of the crab crash. Young people departed for educational and job opportunities. Older people left to be closer to medical care. St. George, its sister island, lost its school years ago and now has about 40 residents.
If you layer climate-related disruptions—such as changing weather patterns, rising sea levels, and shrinking populations of fish and game—on top of economic troubles, it just increases the pressure to migrate.
When people leave, precious intangibles vanish as well: a language spoken for 10,000 years, the taste for seal oil, the method for weaving yellow grass into a tiny basket, words to hymns sung in Unangam Tunuu, and maybe most importantly, the collective memory of all that had happened before. St. Paul played a pivotal role in Alaska’s history. It’s also the site of several dark chapters in America’s treatment of Indigenous populations. But as people and their memories disappear, what remains? There is so much to remember.
The Pribilofs consist of five volcano-made islands—but people now live mainly on St. Paul. The island is rolling, treeless, with black sand beaches and towering basaltic cliffs that drop into a crashing sea. In the summer it grows verdant with mosses, ferns, grasses, dense shrubs, and delicate wildflowers. Millions of migratory seabirds arrive every year, making it a tourist attraction for birders that’s been called the “Galapagos of the North.”

Driving the road west along the coast, you might glimpse a few members of the island’s half-century-old domestic reindeer herd. The road gains elevation until you reach a trailhead. From there you can walk the soft fox path for miles along the top of the cliffs, seabirds gliding above you—many species of gulls, puffins, common murres with their white bellies and obsidian wings. In spring, before the island greens up, you can find the old ropes people use to climb down to harvest murre eggs. Foxes trail you. Sometimes you can hear them barking over the sound of the surf.
Two-thirds of the world’s population of northern fur seals—hundreds of thousands of animals—return to beaches in the Pribilofs every summer to breed. Valued for their dense, soft fur, they were once hunted to near extinction.
Alaska’s history since contact is a thousand stories of outsiders overwriting Indigenous culture and taking things—land, trees, oil, animals, minerals—of which there is a limited supply. St. Paul is perhaps among the oldest examples. The Unangax̂—sometimes called Aleuts—had lived on a chain of Aleutian Islands to the south for thousands of years and were among the first Indigenous people to see outsiders—Russian explorers who arrived in the mid-1700s. Within 50 years, the population was nearly wiped out. People of Unangax̂ descent are now scattered across Alaska and the world. Just 1,700 live in the Aleutian region.
St. Paul is home to one of the largest Unangax̂ communities left. Many residents are related to Indigenous people kidnapped from the Aleutian Islands and forced by Russians to hunt seals as part of a lucrative 19th-century fur trade. St. Paul’s robust fur operation, subsidized by slave labor, became a strong incentive for the United States’ purchase of the Alaska territory from Russia in 1867.
On the plane ride in, I read the 2022 book that detailed the history of piracy in the early seal trade on the island, Roar of the Sea: Treachery, Obsession, and Alaska’s Most Valuable Wildlife by Deb Vanasse. One of the facts that stayed with me: Profits from Indigenous sealing allowed the US to recoup the $7.2 million it paid for Alaska by 1905. Another: After the purchase, the US government controlled islanders well into the mid-20th century as part of an operation many describe as indentured servitude.
The government was obligated to provide for housing, sanitation, food, and heat on the island, but none were adequate. Considered “wards of the state,” the Unangax̂ were compensated for their labors in meager rations of canned food. Once a week, Indigenous islanders were allowed to hunt or fish for subsistence. Houses were inspected for cleanliness and to check for home brew. Travel on and off the island was strictly controlled. Mail was censored.
Between 1870 and 1946, Alaska Native people on the islands earned an estimated $2.1 million, while the government and private companies raked in $46 million in profits. Some inequitable practices continued well into the 1960s, when politicians, activists, and the Tundra Times , an Alaska Native newspaper , brought the story of the government’s treatment of Indigenous islanders to a wider world.
During World War II, the Japanese bombed Dutch Harbor and the US military gathered St. Paul residents with little notice and transported them 1,200 miles to a detention camp at a decrepit cannery in Southeast Alaska at Funter Bay. Soldiers ransacked their homes on St. Paul and slaughtered the reindeer herd so there would be nothing for the Japanese if they occupied the island. The government said the relocation and detention were for protection, but they brought the Unangax̂ back to the island during the seal season to hunt. A number of villagers died in cramped and filthy conditions with little food. But Unangax̂ also became acquainted with Tlingits from the Southeast region, who had been organizing politically for years through the Alaska Native Brotherhood/Sisterhood organization.
After the war, the Unangax̂ people returned to the island and began to organize and agitate for better conditions. In one famous suit, known as “the corned beef case,” Indigenous residents working in the seal industry filed a complaint with the government in 1951. According to the complaint, their compensation, paid in the form of rations, included corned beef, while white workers on the island received fresh meat. After decades of hurdles, the case was settled in favor of the Alaska Native community for more than $8 million.
“The government was obligated to provide ‘comfort,’ but ‘wretchedness’ and ‘anguish’ are the words that more accurately describe the condition of the Pribilof Aleuts,” read the settlement, awarded by the Indian Claims Commission in 1979. The commission was established by Congress in the 1940s to weigh unresolved tribal claims.
Prosperity and independence finally came to St. Paul after commercial sealing was halted in 1984. The government brought in fishermen to teach locals how to fish commercially for halibut and funded the construction of a harbor for crab processing. By the early ’90s, crab catches were enormous, reaching between 200 and 300 million pounds per year. (By comparison, the allowable catch in 2021, the first year of marked crab decline, was 5.5 million pounds, though fishermen couldn’t catch even that.) The island’s population reached a peak of more than 700 people in the early 1990s but has been on a slow decline ever since.
I’d come to the island in part to talk to Aquilina Lestenkof, a historian deeply involved in language preservation. I found her on a rainy afternoon in the bright blue wood-walled civic center, which is a warren of classrooms and offices, crowded with books, artifacts, and historic photographs. She greeted me with a word that starts at the back of the throat and rhymes with “song.” “Aang,” she said.
Lestenkof moved from St. George, where she was born, to St. Paul when she was four. Her father, who was also born in St. George, became the village priest. She had long salt-and-pepper hair and a tattoo that stretched across both her cheeks made of curved lines and dots. Each dot represents an island where a generation of her family lived, beginning with Attu in the Aleutians, then traveling to the Russian Commander Islands—also a site of a slave sealing operation—as well as Atka, Unalaska, St. George, and St. Paul.
“I’m the fifth generation having my story travel through those six islands,” she said.
Lestenkof is a grandmother, related to a good many people in the village and married to the city manager. For the past 10 years she’s been working on revitalizing Unangam Tunuu, the Indigenous language. Only one elder in the village speaks fluently now. He’s among the fewer than 100 fluent speakers left on the planet, though many people in the village understand and speak some words.
Back in the 1920s, teachers in the government school put hot sauce on her father’s tongue for speaking Unangam Tunuu, she told me. He didn’t require his children to learn it. There’s a way that language shapes how you understand the land and community around you, she said, and she wanted to preserve the parts of that she could.
“[My father] said, ‘If you thought in our language, if you thought from our perspective, you’d know what I’m talking about,’” she said. “I felt cheated.”

She showed me a wall covered with rectangles of paper that tracked grammar in Unangam Tunuu. Lestenkof said she needed to hunt down a fluent speaker to check the grammar. Say you wanted to say “drinking coffee,” she explained. You might learn that you don’t need to add the word for “drinking.” Instead, you might be able to change the noun to a verb just by adding an ending to it.
Her program had been supported by money from a local nonprofit invested in crabbing and, more recently, by grants, but she was recently informed that she may lose funding. Her students come from the village school, which is shrinking along with the population. I asked her what would happen if the crabs fail to come back. People could survive, she said, but the village would look very different.
“Sometimes I’ve pondered, is it even right to have 500 people on this island?” she said.
If people moved off, I asked her, who would keep track of its history? “Oh, so we don’t repeat it?” she asked, laughing. “We repeat history. We repeat stupid history, too.”

Until recently, during the crab season, the Bering Sea fleet had some 70 boats, most of them home-ported in Washington state, with crews that came from all over the US. Few villagers work in the industry, in part because the job only lasts for a short season. Instead, they fish commercially for halibut, have positions in the local government or the tribe, or work in tourism. Processing is hard, physical labor—a schedule might be seven days a week, 12 hours a day, with an average pay of $17 an hour. As with lots of processors in Alaska, nonresident workers on temporary visas from the Philippines, Mexico, and Eastern Europe fill many of the jobs.
The crab plant echoes the dynamics of commercial sealing, she said. Its workers leave their homeland, working hard labor for low pay. It was one more industry depleting Alaska’s resources and sending them across the globe. Maybe the system didn’t serve Alaskans in a lasting way. Do people eating crab know how far it travels to the plate?

“We have the seas feeding people in freakin’ Iowa,” she said. “They shouldn’t be eating it. Get your own food.”

Ocean temperatures are increasing all over the world, but sea surface temperature change is most dramatic in the high latitudes of the Northern Hemisphere. As the North Pacific experiences sustained increases in temperature, it also warms up the Bering Sea to the north, through marine heat waves. During the past decade, these heat waves have grown more frequent and longer-lasting than at any time since record-keeping began more than 100 years ago. Scientists expect this trend to continue.
A marine heat wave in the Bering Sea between 2016 and 2019 brought record warmth, preventing ice formation for several winters and affecting numerous cold-water species, including Pacific cod and pollock, seals, seabirds, and several types of crab.
Snow crab stocks always vary, but in 2018 a survey indicated that the snow crab population had exploded—it showed a 60 percent boost in market-sized male crab. (Only males of a certain size are harvested.) The next year showed abundance had fallen by 50 percent. The survey skipped a year due to the pandemic. Then, in 2021, the survey showed that the male snow crab population had dropped by more than 90 percent from its high point in 2018. All major Bering Sea crab stocks, including red king crab and bairdi crab, were way down too. The most recent survey showed a decline in snow crabs from 11.7 billion in 2018 to 1.9 billion in 2022.
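Taken together, those totals imply an overall drop of roughly 84 percent between the 2018 and 2022 surveys; the steeper “more than 90 percent” figure refers to market-sized males alone. A minimal back-of-the-envelope check, using only the numbers quoted above (this is illustrative arithmetic, not NOAA’s survey methodology):

```python
# Rough arithmetic on the survey estimates cited in this story.
# The two totals come from the article; nothing here is NOAA code.

estimate_2018 = 11.7e9  # estimated Bering Sea snow crabs, 2018 survey
estimate_2022 = 1.9e9   # estimated Bering Sea snow crabs, 2022 survey

decline = (estimate_2018 - estimate_2022) / estimate_2018
print(f"Overall decline, 2018-2022: {decline:.0%}")  # ~84%
```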
Scientists think a large pulse of young snow crabs came just before years of abnormally warm water temperatures, which led to less sea ice formation. One hypothesis is that these warmer temperatures drew sea animals from warmer climates north, displacing cold water animals, including commercial species like crab, pollock, and cod.
Another has to do with food availability. Crabs depend on cold water—water that’s 2 degrees Celsius (35.6 degrees Fahrenheit), to be exact—that comes from storms and ice melt, forming cold pools on the bottom of the ocean. Scientists theorize that cold water slows crabs’ metabolisms, reducing their need for food. But with the warmer water on the bottom, they needed more food than was available. It’s possible they starved or cannibalized each other, leading to the crash now underway. Either way, warmer temperatures were key. And there’s every indication temperatures will continue to increase with global warming.
“If we’ve lost the ice, we’ve lost the 2-degree water,” Michael Litzow, shellfish assessment program manager with the National Oceanic and Atmospheric Administration, told me. “Cold water, it’s their niche—they’re an Arctic animal.” The snow crab may rebound in a few years, so long as there aren’t any periods of warm water. But if warming trends continue, as scientists predict, the marine heat waves will return, pressuring the crab population again.
Bones litter the wild part of St. Paul Island like Ezekiel’s Valley in the Old Testament—reindeer ribs, seal teeth, fox femurs, whale vertebrae, and air-light bird skulls hide in the grass and along the rocky beaches, evidence of the bounty of wildlife and 200 years of killing seals.
When I went to visit Phil Zavadil, the city manager and Aquilina’s husband, in his office, I found a couple of sea lion shoulder bones on a coffee table. Called “yes/no” bones, they have a fin along the top and a heavy ball at one end. In St. Paul, they function like a magic eight ball. If you drop one and it falls with the fin pointing right, the answer to your question is yes. If it falls pointing left, the answer is no. One large one was labeled “City of St. Paul Big-Decision Maker.” The other was labeled “budget bone.” The town’s long-term health, Zavadil told me, wasn’t yet dire, despite the sudden loss of the crab. The town had invested during the heyday of crabbing and, with a somewhat reduced budget, could likely sustain itself for a decade.
“That’s if something drastic doesn’t happen. If we don’t have to make drastic cuts,” he said. “Hopefully the crab will come back at some level.” The easiest economic solution for the collapse of the crab fishery would be to convert the plant to process other fish, Zavadil said. There were some regulatory hurdles, but they weren’t insurmountable. City leaders were also exploring mariculture—raising seaweed, sea cucumbers, and sea urchins. That would require finding a market and testing mariculture methods in St. Paul’s waters. The fastest timeline for that was maybe three years, he said. Or they could promote tourism. The island has about 300 tourists a year, most of them hardcore birders.
The trick was to stabilize the economy before too many working-age adults moved away. There were already more jobs than people to fill them. Older people were passing away, younger families were moving out.
“I had someone come up to me the other day and say, ‘The village is dying,’” he said, but he didn’t see it that way. There were still people working and lots of solutions to try.
“There is cause for alarm if we do nothing,” he said. “We’re trying to work on things and take action the best we can.”

Aquilina Lestenkof’s nephew, Aaron Lestenkof, is an island sentinel with the tribal government, a job that entails monitoring wildlife and overseeing the removal of an endless stream of trash that washes ashore. He drove me along a bumpy road down the coast to see the beaches that would soon be noisy and crowded with seals.
We parked, and I followed him to a wide field of nubby vegetation stinking of seal scat. A handful of seal heads popped up over the rocks. They eyed us, then shimmied into the surf.
In the old days, Alaska Native seal workers used to walk out onto the crowded beaches, club the animals in the head, and then stab them in the heart. They took the pelts and harvested some meat for food, but some went to waste. Aquilina Lestenkof told me taking animals like that ran counter to how Unangax̂ related to the natural world before the Russians came.
“You have a prayer or ceremony attached to taking the life of an animal—you connect to it by putting the head back in the water,” she said.
Slaughtering seals for pelts made people numb, she told me. The numbness passed from one generation to the next. The era of crabbing had been in some ways a reparation for all the years of exploitation, she said. Climate change brought new, more complex problems.
I asked Aaron Lestenkof if his elders ever talked about the time in the detention camp where they were sent during World War II. He told me his grandfather, Aquilina’s father, sometimes recalled a painful experience of having to drown rats in a bucket there. The act of killing animals that way was compulsory—the camp had become overrun with rats—but it felt like an ominous affront to the natural order, a trespass he’d pay for later. Every human action in nature has consequences, he often said. Later, when he lost his son, he remembered drowning the rats.
“Over at the harbor, he was playing and the waves were sweeping over the dock there. He got swept out and he was never found,” Aaron Lestenkof said. “That’s, like, the only story I remember him telling.” We picked our way down a rocky beach littered with trash—faded coral buoys, disembodied plastic fishing gloves and boots, an old ship’s dishwasher lolling open. He said the animals around the island were changing in small ways. There were fewer birds now. A handful of seals were now living on the island year-round, instead of migrating south. Their population was also declining.
People still fish, hunt marine mammals, collect eggs, and pick berries. Aaron Lestenkof hunts red-legged kittiwakes and king eiders, though he doesn’t have a taste for the bird meat. He finds elders who do like them, but that’s gotten harder. He wasn’t looking forward to the lean years of waiting for the crabs to return. Proceeds from the community’s investment in crabbing boats had paid the heating bills of older people; the boats also supplied the elderly with crab and halibut for their freezers. They supported education programs and environmental cleanup efforts. But now, he said, having the crab gone would “affect our income and the community.” Aaron Lestenkof was optimistic that they might cultivate other industries and grow tourism. He hoped so, because he never wanted to leave the island. His daughter was away at boarding school because there was no in-person high school anymore. He hoped, when she grew up, that she’d want to return and make her life in town.
On Sunday morning, the 148-year-old church bell at Saints Peter and Paul Russian Orthodox Church tolled through the fog. A handful of older women and men filtered in and stood on separate sides of the church among gilded portraits of the saints. The church has been part of village life since the beginning of Russian occupation, one of the few places, people said, where Unangam Tunuu was welcome.
A priest sometimes travels to the island, but that day George Pletnikoff Jr., a local, acted as subdeacon, singing the 90-minute service in English, Church Slavonic, and Unangam Tunuu. George helps with Aquilina Lestenkof’s language class. He is newly married with a 6-month-old baby.
After the service, he told me that maybe people weren’t supposed to live on the island. Maybe they needed to leave that piece of history behind.
“This is a traumatized place,” he said.
It was only a matter of time until the fishing economy didn’t serve the village anymore and the cost of living would make it hard for people to stay, he said. He thought he’d move his family south to the Aleutians, where his ancestors came from.
“Nikolski, Unalaska,” he told me. “The motherland.” The next day, just before I headed to the airport, I stopped back at Aquilina Lestenkof’s classroom. A handful of middle school students arrived, wearing oversize sweatshirts and high-top Nikes. She invited me into a circle where students introduced themselves in Unangam Tunuu, using hand gestures that helped them remember the words.
After a while, I followed the class to a work table. Lestenkof guided them, pulling a needle through a papery dried seal esophagus to sew a waterproof pouch. The idea was that they’d practice words and skills that generations before them had carried from island to island, hearing and feeling them until they became so automatic they could teach them to their own children.
This story was produced in collaboration with the Food & Environment Reporting Network, a nonprofit news organization.