Tech and the fine art of complicity
What are the opportunities for technology in the arts – and what threats does it pose? Knight Foundation, which recently launched an open call for ideas to use technology to connect people to the arts, asked leaders in the field to answer that question. Here, Jer Thorp, a data artist and innovator-in-residence at the Library of Congress, answers with his own, provocative inquiry. As an artist who works with technology, my daily practice is punctuated with questions: Which natural language processing algorithm should I be using? How well will this cheap webcam function in low light? What exactly does that error message mean? Lately, though, I’ve been spending a lot of time dwelling on one question in particular: How are the works that I’m creating, the methods that I’m using to create them, and the ways that I am presenting them to the world supporting the agendas of Silicon Valley and the adjacent power structures of government? Tech-based artist Zach Lieberman said in 2011 that “artists are the R&D department of humanity.” This might be true, but it’s not necessarily a good thing. In a field that is eager to demonstrate technical innovation, artists are often the first to uncover methodologies and approaches that, while intriguing, are fundamentally problematic. I learned this firsthand after I released Just Landed in 2009, a piece that analyzed tweets to extract patterns of travel. While my intentions were good, I later learned that several advertisers saw the piece as a kind of instructional video for how personal data could be scraped, unbeknownst to users, from a Twitter feed. At no point in the development of Just Landed did I stop to consider the implications of the code I was writing, and how it might go on to be used. Without realizing it, I was complicit in the acceleration of dangerous “big data” practices. Complicity can also come through the medium in which we present our work.
In 2017, Jeff Koons partnered with Snapchat to produce an augmented reality art piece, which rendered Koons’ iconic sculptures in real-world locations, visible only through a Snapchat Lens. Though Koons has never been afraid to embrace commercialism, this project forces the viewer to embrace it as well: to view the artwork, people need to purchase specific hardware, install Snapchat, and agree to a Terms of Service with concerning implications for user privacy and creative ownership. It seems clear to me in my own practice that any artwork that requires a user to agree to commercial use of personal data is fundamentally dangerous. Google’s Arts & Culture app uses face recognition to match images to artworks. Left: A police officer uses glasses enabled with facial recognition at Zhengzhou East Railway Station. Right: A matched face extracted from “Gongginori” by Kim Jun-geun. Screenshot by Jer Thorp. In January, Google added a feature to its heretofore unnoticed Arts & Culture app which allowed users to match their own selfies to artworks held by museums and galleries around the world. This project, though phenomenally popular, was rightly criticized for offering a depressingly thin experience for people of color, who are severely underrepresented in museum holdings. As well as being a prejudiced gatekeeper to the art experience, this tool seems to me to play another dangerous role: cheerleader for facial recognition. This playfully soft demonstration of face recognition is like using a taser to power a merry-go-round: it works, and it might be good PR, but it is certainly not matched to the real-world purpose of the technology. Five years ago, artist Allison Burtch coined the term ‘cop art’ to describe tech-based works that abuse the same unbalanced power structures they are criticizing. Specifically, she pointed to artworks that surveil the viewer, and asked: How is the artist any different from the police?
As Facebook, Twitter and Google face a long-awaited moral reckoning, we artists who use technology must also examine ourselves and our methods critically. How are we different from these corporations? How are we complicit? For more about the open call for ideas on art and technology, visit prototypefund.org.
Canon’s flagship DSLR line will end with the EOS-1D X Mark III, eventually
By Richard Lawler, Dec 30, 2021, 11:48 PM UTC. When Canon revealed the EOS-1D X Mark III in January 2020, we proclaimed that the DSLR “still isn’t dead,” but that camera will mark the end of the line for a flagship model that some pro photographers still swear by to capture everything from sporting events to wild animals. An end to the production and development timeline of the EOS-1 is estimated as “within a few years.” CanonRumors points out an interview Canon’s chairman and CEO Fujio Mitarai gave this week to the Japanese newspaper Yomiuri Shimbun (via Y.M. Cinema Magazine). The piece highlights how high-end mirrorless interchangeable-lens cameras have taken the market share that digital single-lens reflex (DSLR) cameras previously dominated. In it, the CEO is quoted (in Japanese, which we’ve translated to English) as saying, “Market needs are rapidly moving toward mirrorless cameras. So accordingly, we’re increasingly moving people in that direction.” The article states that the Mark III is “in fact” the last model in Canon’s flagship EOS-1 series and that in a few years Canon will stop developing and producing its flagship DSLR cameras in favor of mirrorless cameras. However, despite what some headlines say, it doesn’t mean this is the end of Canon DSLRs (yet). While the article makes it plain that mirrorless cameras like Canon’s own EOS R3 represent the future of the segment, it also says that because of strong overseas demand, the company plans to continue making intro/mid-range DSLR cameras for the time being. As for the Mark III itself, while a new model is not around the corner, its estimated lifespan as an active product is still measured in years. In a statement given to The Verge, a Canon US spokesperson confirmed, “The broad details of Mr. Mitarai’s interview as described in the article are true.
However, while estimated as “within a few years,” exact dates are not confirmed for the conclusion of development/termination of production for a flagship DSLR camera.”
Florence Medieval Wine Windows Are Open Again to Serve All Kinds of Things
Regia Marinho, Aug 10, 2020. Ideas from the past… from Florence, Italy. To serve food and drinks while social distancing, bars and restaurants are bringing back the tiny windows used during the 17th-century plague. Wine windows, known locally as buchette del vino, are small hatches carved into the walls of over 150 buildings in Florence…
An open source TTRPG town generator that outputs text ready to read out
Facebook’s Pushback: Stem the Leaks, Spin the Politics, Don’t Say Sorry
Win McNamee/Getty Images. The Facebook Files: Chief Executive Mark Zuckerberg drove the response to disclosures about the company’s influence, sending deputies to testify in Congress. The day after former Facebook employee and whistleblower Frances Haugen went public in October, the company’s team in Washington started working the phones.
The Cost of Digital Consumption
By Halden Lin, Aishwarya Nirmal, Shobhit Hathi, Lilian Liang. In 2014, Google had to shark-proof its underwater cables. It turned out that sharks were fond of chewing on the high-speed cables that make up the internet. While these “attacks” are no longer an issue, they are a reminder that both the internet and sharks share the same home. When you buy a book on Amazon, the environmental consequences are difficult to ignore. From the jet fuel and gasoline burned to transport the packages, to the cardboard boxes cluttering your house, the whole process is filled with reminders that your actions have tangible, permanent consequences. Download that book onto an e-reader, however, and this trail of evidence seems to vanish. The reality is, just as a package purchased on Amazon must be transported through warehouses and shipping centers to get to your home, a downloaded book must be transported from data centers and networks across the world to arrive on your screen. And that’s the thing: your digital actions are only possible because of the physical infrastructure built to support them. Digital actions are physical ones. This means that Google has to be mindful of sharks. This also means that we should be mindful of the carbon emissions produced by running the world’s data centers and internet networks.
According to one study, the Information and Communications Technology sector is expected to account for 3–3.6% of global greenhouse gas emissions in 2020 [1]. That’s more than the fuel emissions for all air travel in 2019, which clocked in at 2.5%. The answer to “why?” and “how?” may not be immediately obvious, but that’s not the fault of consumers. A well-designed, frictionless digital experience means that users don’t need to worry about what happens behind the scenes and, by extension, the consequences. This is problematic: the idea of “hidden costs” runs contrary to principles of environmental awareness. Understanding how these digital products and services work is a crucial first step toward addressing their environmental impact. Each type of digital activity produces different levels of emissions. The amount of carbon dioxide emitted by a particular digital activity is a function of the quantity of information that needs to be loaded. More specifically, we can estimate emissions using the following formula: n bytes × X kWh/byte × Y g CO₂/kWh. A byte is a unit of information. X = 6 × 10⁻¹¹ is the global average energy consumed to transmit one byte of data in 2015, as calculated by Aslan et al. (2017) [2]. Y = 707 is the EPA’s U.S. national weighted average for grams of CO₂ emitted per kilowatt-hour of electricity consumed. Ideally, this formula would also include the energy usage of the data source and your device, but these vary across digital media providers, users, and activities. The formula therefore provides a reasonable lower bound for emissions. Emissions of Websites: each bar represents the carbon emitted when scrolling through a website for 60 seconds. Car distance equivalent is calculated using the fuel economy of an average car.
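As a sketch of how this formula works in practice, the estimate can be written as a small function. The 1.2 MB page size below is an illustrative assumption, chosen because it lands near the 51-milligram figure quoted for this article:

```python
KWH_PER_BYTE = 6e-11   # X: global average energy to transmit one byte (Aslan et al. 2017)
G_CO2_PER_KWH = 707    # Y: EPA U.S. national weighted average, grams of CO2 per kWh

def co2_grams(n_bytes):
    """Lower-bound estimate of grams of CO2 emitted to transmit n_bytes."""
    return n_bytes * KWH_PER_BYTE * G_CO2_PER_KWH

# A page weighing roughly 1.2 MB (hypothetical size):
mg = co2_grams(1.2e6) * 1000
print(f"{mg:.0f} mg of CO2")  # → 51 mg of CO2
```

Because the formula omits the data source and the reading device, the real number is somewhat higher; the sketch reproduces only the transmission term.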
In order to load the Parametric Press article that you are currently reading, we estimate that 51 milligrams of CO₂ were produced. The same amount of CO₂ would be produced by driving a car 0.20 meters (based on the fuel economy of an average car, according to the EPA). These emissions are a result of loading data for the text, graphics, and visualizations that are then rendered on your device. The chart displays the carbon emitted when loading various websites and scrolling through each at a constant speed for 60 seconds (this scrolling may incur more loading, depending on the site). All data collection was done using a Chrome web browser over a 100 Mbps Wi-Fi connection. Clicking each bar in the chart shows a preview of that scroll, sped up to a 5-second clip. As the chart shows, loading websites like Google, which primarily show text, produces much lower emissions than loading websites like Facebook, which load many photos and videos to your device. Emissions of Audio: each bar represents the carbon emitted when listening to an audio clip for 60 seconds (no audio preview). Note: the amount of data loaded may represent more than a minute’s worth due to buffering. Let’s take a closer look at one common type of non-text media: audio. When you listen to audio on your device, you generally load a version of the original audio file that has been compressed into a smaller size. In practice, this size is often determined by the choice of bitrate, which refers to the average amount of information in a unit of time.
The NPR podcast shown in the visualization was compressed to a bitrate of 128 kilobits per second (there are 8 bits in a byte), while the song “Old Town Road”, retrieved from Spotify, was compressed to 256 kilobits per second. This means that in the one minute that both audio files were played, roughly twice as much data needed to be loaded for “Old Town Road” as for “Digging into ‘American Dirt’”, giving the song about twice the carbon footprint. The song’s greater emissions reflect not the footprint of songs versus podcasts, but the difference in the bitrate of each audio file. These audio examples have lower carbon emissions than most of the multimedia websites shown earlier. Videos are a particularly heavy digital medium. The chart shows emissions for streaming different YouTube videos at two different qualities, 360p and 1080p, for 60 seconds each. Emissions of Video: each bar represents the carbon emitted when watching a video for 60 seconds at each quality (to reduce the size of this article, there is only one 360p preview for both qualities). Note: like audio, videos are buffered, which means that playing the video may have loaded more than 60 seconds of content.
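The audio bitrate arithmetic above can be checked with a short sketch. The constants are the ones from the transmission formula introduced earlier, and the 60-second durations match the visualization:

```python
KWH_PER_BYTE = 6e-11   # global average energy per byte transmitted (Aslan et al. 2017)
G_CO2_PER_KWH = 707    # EPA U.S. average grams of CO2 per kWh

def stream_bytes(bitrate_kbps, seconds):
    # kilobits per second -> bytes: multiply by 1000, divide by 8 bits per byte.
    return bitrate_kbps * 1000 / 8 * seconds

podcast = stream_bytes(128, 60)   # "Digging into 'American Dirt'": 960,000 bytes
song = stream_bytes(256, 60)      # "Old Town Road": 1,920,000 bytes

print(song / podcast)                              # 2.0, i.e. twice the data
print(song * KWH_PER_BYTE * G_CO2_PER_KWH * 1000)  # ≈ 81 mg of CO2 for the song
```

Doubling the bitrate doubles the bytes streamed per minute, and the emission estimate is linear in bytes, so the footprint doubles too.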
When you view a video at a higher quality, you receive a clearer image on your device because the video that you load contains more pixels. Pixels are units of visual information. In the chart, the number before the “p” in the quality label refers to the height, in pixels, of the video. This is why there are greater emissions for videos at 1080p than those at 360p: more pixels means more data loaded per frame. Bitrate also plays a role in video streaming. Here, bitrate refers to the amount of visual and audio data loaded to your device over some timespan. From the chart, it is clear that the “Old Town Road” music video has a higher bitrate than the 3Blue1Brown animation at both qualities. This difference could be attributed to a variety of factors, such as the frame rate, compression algorithm, and the producers’ desired video fidelity. In the examples provided, videos produced far more CO₂ than audio over the same time span. This is especially apparent when comparing the emissions for the audio of “Old Town Road” and its corresponding music video. Not only does a video require loading both audio and visual data, but visual data is also particularly heavy in information. Notice that loading the website Facebook produced the most emissions, likely a result of loading multiple videos and other heavy data. Ads are another common form of digital content. Digital media providers often include advertisements as a method of generating revenue.
When the European version of the USA Today website removed ads and tracking scripts to comply with GDPR (General Data Protection Regulation), the size of its website decreased from 5,000 to 500 kilobytes with no significant changes to the appearance of the site. In a lot of cases, when you stream content online, you don’t receive all of the information for that content at once. Instead, your device loads incremental pieces of the data as you consume the media. These pieces are called packets. In each media emission visualization, we estimated emissions based on the size and quantity of the packets needed to load each type of media. In the timeline breakdown, we can see that the way in which packets arrive for video and audio differs from the pattern for websites. When playing video and audio, packets tend to travel to your device at regular intervals. In contrast, the packets for websites are weighted more heavily toward the beginning of the timeline, though websites may make more requests for data as you scroll through and load more content. We’ve just seen how digital streams are made up of packets of data sent over the internet. These packets aren’t delivered by magic. Every digital streaming platform relies on a system of computers and cables, each part consuming electricity and releasing carbon emissions.
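Since the visualizations estimate emissions from packet sizes, that computation reduces to summing captured packet lengths and applying the same transmission formula. The packet list below is made up purely for illustration:

```python
KWH_PER_BYTE = 6e-11   # global average energy per byte transmitted (Aslan et al. 2017)
G_CO2_PER_KWH = 707    # EPA U.S. average grams of CO2 per kWh

def emissions_from_packets(packet_sizes):
    """Grams of CO2 for a capture, given each packet's size in bytes."""
    return sum(packet_sizes) * KWH_PER_BYTE * G_CO2_PER_KWH

# Hypothetical capture: a burst of full-size packets up front (the initial
# page load), then smaller ones as scrolling triggers more requests.
capture = [1500] * 600 + [800] * 120
print(emissions_from_packets(capture) * 1000, "mg CO2")
```

A website capture front-loads its packets like this, while an audio or video capture would show similarly sized packets arriving at regular intervals for the whole session.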
When we understand how these platforms deliver their content, we can directly link our digital actions to the physical infrastructure that releases carbon dioxide. Let’s take a look at YouTube. With its 2 billion+ users, YouTube is the prototypical digital streaming service. How does one of its videos arrive on your screen? YouTube pipeline electricity usage for 2016 [Preist et al. 2017]: origin data centers, 350 GWh; internet, 1,900 GWh; fixed-line networks, 4,400 GWh; cellular networks, 8,500 GWh; devices, 6,100 GWh; 19.6 TWh in total, shown alongside the ICT sector’s 2010 usage and projected 2020 usage [Belkhir & Elmeligi 2017]. Videos are stored in facilities called “data centers”: warehouses full of giant computers designed for storing and distributing videos. For global services, these are often placed around the world. YouTube’s parent company, Google, has 21 origin data centers strategically placed across four continents (North America, South America, Europe, and Asia). Let’s take a closer look at one of these origin data centers, in The Dalles, Oregon, on the West Coast of the United States. For information to get from this data center to you, it first travels through Google’s own specialized data network to what the company calls Edge Points of Presence (POPs for short), which bring data closer to high-traffic areas. There are three metro areas with POPs in this region: Seattle, San Francisco, and San Jose. From these POPs, data is routed through smaller data centers that form the “Google Global Cache” (GGC). These data centers are responsible for storing the more popular or recently watched videos for users in a given area, ensuring no single data center is overwhelmed and service stays zippy. There are 22 in the region shown on the map.
A more general term for this collection of smaller data centers is a Content Delivery Network (CDN for short). In 2018, researchers from the University of Bristol used publicly available data to estimate the energy consumption of each step of YouTube’s pipeline in 2016. Google does not disclose its data center energy consumption for YouTube traffic specifically, so Chris Preist, Daniel Schien and Paul Shabajee used the energy consumption numbers released for a similar service’s (Netflix’s) data centers to estimate YouTube’s. They found that all data centers accounted for less than 2% of YouTube’s electricity use in 2016 [3]. Google doesn’t have its own network for communication between POPs and the Google Global Cache; for that, it uses the internet. The internet is a global “highway of information” that allows packets of data to be transmitted as electrical impulses. A packet is routed from a source computer, through cables and intermediary computers, before arriving at its destination. In addition to the 550,000 miles of underwater cables that form the backbone of the internet, regions have their own land-based networks. Here’s (roughly) what the major internet lines of the West Coast look like [4]. Perhaps not surprisingly, the map resembles our interstate highway system. Preist et al. estimate that this infrastructure consumed approximately 1,900 gigawatt-hours of electricity to serve YouTube videos in 2016 [3], enough to power 170,000 homes in the United States for a year, according to the EIA. The packets traveling across this information highway need “off-ramps” to reach your screen. The off-ramps that packets take are either “fixed line” residential networks (wired connections from homes to the internet) or cellular networks (wireless connections from cell phones to the internet).
The physical infrastructure making up these two types of networks differs, and the two therefore have distinct profiles of energy consumption and carbon emissions. An estimated 88% of YouTube’s traffic went through fixed line networks (from your residential cable, DSL, or fiber-optic providers), and this accounted for approximately 4,400 gigawatt-hours of electricity usage [3], enough to power over 400,000 U.S. homes. In comparison, only 12% of YouTube’s traffic went through cellular networks, but they were by far the most expensive part of YouTube’s content delivery pipeline, accounting for approximately 8,500 gigawatt-hours of electricity usage, enough to power over 750,000 U.S. homes [3]. At over 10 times the electricity usage per unit of traffic, the relative inefficiency of cellular transmission is clear. Eventually, the video data reaches your device for viewing. While your device might not technically be part of YouTube’s content delivery pipeline, we can’t overlook the cost of moving those pixels. Devices accounted for an estimated 6,100 gigawatt-hours of electricity usage [3]: that’s over half a million U.S. homes’ worth of electricity. In total, Preist et al.’s research estimated that YouTube traffic consumed 19.6 terawatt-hours of electricity in 2016 [3]. Using the world emissions factor for electricity generation as reported by the International Energy Agency, they place the resulting carbon emissions at 10.2 million metric tons of CO₂ (offset to 10.1 after Google’s renewable energy purchases for its data center activities). YouTube emitted nearly as much CO₂ as a metropolitan area like Auckland, New Zealand did in 2016. Put another way, 10.2 MtCO₂ is equivalent to the yearly footprint of approximately 2.2 million cars in the United States. YouTube’s monthly active user count has grown by at least 33% since 2016 (from 1.5 billion in 2017 to over 2 billion in 2019).
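The “over 10 times” figure can be recovered from the numbers above by dividing each network type’s electricity use by its share of traffic (the shares and gigawatt-hour totals are the Preist et al. estimates quoted in the text):

```python
# Preist et al. (2017) estimates for YouTube's 2016 delivery traffic.
fixed_share, fixed_gwh = 0.88, 4400   # fixed-line networks
cell_share, cell_gwh = 0.12, 8500     # cellular networks

# Electricity consumed per unit share of traffic carried.
fixed_per_unit = fixed_gwh / fixed_share   # 5,000 GWh per unit share
cell_per_unit = cell_gwh / cell_share      # ~70,833 GWh per unit share

print(round(cell_per_unit / fixed_per_unit, 1))  # → 14.2
```

So per unit of traffic, cellular delivery consumed roughly fourteen times the electricity of fixed-line delivery, comfortably clearing the article’s “over 10 times”.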
This means its CO₂ emissions in 2020 could be even higher than Preist et al.’s 2016 estimate. If we assume that the emissions factor for electricity usage is similar for each part of the pipeline, we can get a rough idea of the carbon footprint profile of YouTube. However, it’s important to note that this breakdown isn’t necessarily representative of the entire information and communications technology sector. A 2018 study by McMaster University researchers Lotfi Belkhir and Ahmed Elmeligi paints a surprisingly different picture for the sector as a whole [1]. To compare the two studies, we can group the “Internet”, “Residential Network”, and “Cellular Network” sections into an umbrella “Networks” category. Belkhir and Elmeligi provide emission estimates for both 2010 (retrospective) and 2020 (prospective). Most surprising is the weight Data Centers and CDNs carry in their breakdown. We can speculate that the relatively high bandwidth required to transfer video contributes at least partially to the outsized weight of the “Networks” category for YouTube. In the same study, Belkhir and Elmeligi created two models to project the ICT sector’s emissions decades forward. Even their “unrealistically conservative” linear model put tech at a 6–7% share of GHG emissions in 2040, and their exponential model had tech reaching over 14%. What is being done about this? Aside from increasing the efficiency of each part of the pipeline and taking advantage of renewable energy, mindful design could go a long way. In the context of digital streaming, Preist et al. point out that much of YouTube’s traffic comes from music videos, and a good number of those “views” are likely just “listens”. If even just 10% of those views were served as audio alone, YouTube could have reduced its carbon footprint by about 117 thousand tons of CO₂ in 2016, just by intelligently sending audio when no video is required. That’s over 2.2 million gallons of gasoline worth of CO₂ in savings.
Digital streaming is not the only instance where environmentally harmful aspects of technology sit outside of public consciousness. Tech is rarely perceived as environmentally toxic, but here’s a surprising fact: Santa Clara County, the heart of “Silicon Valley”, has the most EPA-classified “Superfund” (highly polluted) sites in the nation. These 23 locations may be impossible to fully clean, and Silicon Valley is primarily to blame.
New Approach to Identify Genetic Boundaries of Species Could Impact Policy
Evolutionary biologists have developed a new computational approach to genomic species delineation that improves upon current methods and could impact policy in the future, lending clarity to legislation for designating a species as endangered or at risk. The coastal California gnatcatcher is an unassuming little gray songbird that has been at the epicenter of a legal brawl for nearly 28 years, ever since the U.S. Fish and Wildlife Service listed it as threatened under the Endangered Species Act. Found along the Baja California coast, from El Rosario, Mexico, to Long Beach, Calif., its natural habitat is the rapidly declining coastal sagebrush that occupies prime, pristine real estate along the West Coast. When Polioptila californica was granted protection, the region’s real estate developers sued to delist it. Central to their argument, which was dismissed in a federal court, was whether it was an independent species or just another population of a more widely found gnatcatcher. This distinction would dictate its threatened status. The new approach is based on the fact that in many groups of organisms it can be problematic to decide where one species begins and another ends. “In the past, when it was challenging to distinguish species based on external characters, scientists relied on approaches that diagnosed signatures in the genome to identify ‘breaks’ or ‘structure’ in gene flow indicative of population separation.
The problem is this method doesn’t distinguish between two populations separated geographically versus two populations being two different species,” said Jeet Sukumaran, computational evolutionary biologist at San Diego State University and lead author of a study published May 13 in PLoS Computational Biology. “Our method, DELINEATE, introduces a way to distinguish between these two factors, which is important because most of the natural resources management policy and legislation in our society rests on clearly defined and named species units.” Typically, scientists will use a range of different methods to identify boundaries between species, including statistical analysis and qualitative data to distinguish between population-level variation and species-level variation in their samples, to complete the classification of an organism. In cases where it is difficult to sort the variation between individuals into within-species versus between-species differences, they often turn to genomic data-based approaches for the answer. This is when scientists often use a model that generates an evolutionary tree relating different populations. Sukumaran and his co-authors, evolutionary biologists L. Lacey Knowles of the University of Michigan, Ann Arbor, and Mark Holder of the University of Kansas, Lawrence, add a second layer of information to this approach to explicitly model the actual speciation process. This allows them to understand how separate populations sometimes evolve into distinct species, which is the basis for distinguishing between populations and species in the data. “Our method allows researchers to make statements about how confident they are that two populations are members of the same species,” Holder said. “That is an advance over just making a best estimate of species assignments.” Whether some of the population lineages in the sample are assigned to existing species or classified as entirely new species depends on two factors.
One is the age of the population isolation events such as the splitting of an ancestral population into multiple daughter populations, which is how species are “born” in an extended process of speciation. The other is the rate of speciation completion, which is the rate at which the nascent or incipient species “born” from population splitting events develop into true full species. Organisms can look alike, Sukumaran said, “even though they are actually distinct species separated by many tens or hundreds of thousands or even millions of years of evolution.” “When rivers change course, when terrain changes, previously cohesive populations get fragmented, and the genetic makeup of the two separate populations, now each a population in their own right, can diverge,” Sukumaran said. “Eventually, one or both these populations may evolve into separate species, and may (or may not) already have reached this status by the time we look at them. “Yet individuals of these two populations may look identical to us based on their external appearances, as differences in these may not have had time to ‘fix’ in either population. This is when we turn to genomic data to help guide us toward deciding whether we are looking at two populations of the same species, or two separate species.” While scientists agree that it is critical to distinguish between populations and species boundaries in genomic data, there is not always a lot of agreement on how to go about doing it. “If you ask ten biologists, you will get twelve different answers,” Sukumaran said. With this framework, scientists can have a better understanding of the status of any species, but especially of multiple independent species that look alike. The work has implications far beyond real estate battles. 
Many fields of science and medicine depend on the accurate demarcation and identification of species, including ecology, evolution, conservation and wildlife management, agriculture and pest management, and epidemiology and vector-borne disease management. These fields also intersect with government, legislation and policy, with major implications for the day-to-day lives of broader human society. The DELINEATE model is a first step in a process that will need to be further refined. Funding for this research came from the National Science Foundation.
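The two quantities the model turns on, how long ago populations became isolated and how quickly incipient species finish speciating, can be combined in a toy calculation. The sketch below is only an illustration of the general idea using an exponential waiting-time assumption; it is not the actual DELINEATE implementation, and the rate and time units are arbitrary.

```python
import math

def prob_full_species(t, completion_rate):
    """Probability that a lineage isolated t time units ago has completed
    speciation, assuming completion events arrive at a constant rate
    (exponential waiting time). Illustrative values only."""
    return 1.0 - math.exp(-completion_rate * t)

# Recently isolated populations are almost certainly still one species;
# anciently isolated ones are almost certainly not.
for t in (0.0, 0.1, 1.0, 10.0):
    print(t, round(prob_full_species(t, completion_rate=0.5), 3))
```

Under a model of this shape, genomic estimates of isolation time translate into a confidence statement about species status, which mirrors the paper's emphasis on quantifying confidence rather than making a single best guess.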
FDA authorizes use of Pfizer's Covid vaccine for 5- to 11-year-olds
Jeff Kowalsky/AFP via Getty Images The Food and Drug Administration has authorized a Pfizer-BioNTech COVID-19 vaccine for children ages 5 to 11. This lower-dose formulation of the companies' adult vaccine was found to be safe and 90.7% effective in preventing symptomatic COVID-19. The agency acted Friday after a panel of independent scientists advising the FDA strongly supported the authorization on Tuesday. The FDA says the emergency use authorization is based on a study of approximately 4,700 children ages 5 to 11. "As a mother and a physician, I know that parents, caregivers, school staff, and children have been waiting for today's authorization. Vaccinating younger children against COVID-19 will bring us closer to returning to a sense of normalcy," said the FDA's acting commissioner, Dr. Janet Woodcock, in a statement. She went on to assure parents that the agency had rigorously evaluated the data and "this vaccine meets our high standards." The next step in the process before the vaccine can be released to pediatricians, pharmacies and other distribution points will be a meeting of an advisory panel to the Centers for Disease Control and Prevention next Tuesday. Depending on the outcome of that committee's deliberations, the CDC's director, Dr. Rochelle Walensky, would then have the final say on whether the vaccine can be used and in what circumstances. Once Walensky weighs in, children in this age group could, conceivably, begin to receive their first shot in early November. A dose of the Pfizer vaccine for young children contains one-third the amount of active ingredient used in the vaccine for those 12 years old and up. Children would receive a second dose 21 days or more after their first shot.
The vaccine also differs from the existing formulation that teens and adults have been getting in that it can be stored in a refrigerator for up to 10 weeks — making it easier for private medical offices, schools and other locations to keep and administer the vaccine. Children ages 5 to 11 have accounted for approximately 9% of reported COVID-19 cases in the U.S. overall and currently account for approximately 40% of pediatric COVID-19 cases, according to Dr. Doran Fink, clinical deputy director of the FDA's Division of Vaccines and Related Products Applications. Currently, he says, the case rate of COVID-19 among children ages 5 to 11 is "near the highest" of any age group. Unvaccinated children who get COVID-19 can develop a serious complication called multisystem inflammatory syndrome, or MIS-C. More than 5,000 children have gotten the condition so far, according to Dr. Fiona Havers, a medical officer at the CDC who presented data this week to the FDA committee. In deliberations at Tuesday's advisory panel, scientists and clinicians discussed the risks of side effects from the vaccine. Myocarditis and pericarditis — which can occur after viral infections, including COVID-19 — have been seen as rare side effects after vaccination with the two mRNA vaccines, Pfizer and Moderna, especially among young men. In the Pfizer-BioNTech study submitted to the FDA, there were no cases of myocarditis in the children studied. However, given that the highest risk for these rare side effects is among teen males, the agency assessed the risks and benefits of vaccinating younger children and concluded that the benefits of preventing hospitalization from COVID-19 outweigh the possible risks of the side effects. During Tuesday's advisory panel discussion, Capt.
Amanda Cohn, a physician and medical officer with the CDC and also a voting member of the FDA committee, said that vaccinating young children against COVID-19 can save lives and keep kids out of the hospital. "We have incredible safety systems in place to monitor for the potential for myocarditis in this age group, and we can respond quickly," she said. "To me, the question is pretty clear. We don't want children to be dying of COVID, even if it is far fewer children than adults, and we don't want them in the ICU."
Geometric Unity
The Theory of Geometric Unity is an attempt by Eric Weinstein to produce a unified field theory by recovering the different, seemingly incompatible geometries of fundamental physics from a general structure with minimal assumptions. On April 1, 2020, Eric prepared for release a video of his 2013 Oxford lecture on Geometric Unity as a special episode of The Portal Podcast. Eric has set April 1 as a new tradition, a day on which we are encouraged to say heretical things that we truly believe, in good faith, without fear of retribution from our employers, institutions, or communities. In this spirit, Eric released the latest draft of his Geometric Unity manuscript on April 1, 2021. An attempt is made to address a stylized question posed to Ernst Straus by Albert Einstein regarding the amount of freedom present in the construction of our field-theoretic universe: "What really interests me is whether God had any choice in the creation of the world." Does something unprecedented happen when we finally learn our own source code? The full transcript of the 2013 Oxford lecture and the latest draft of the Geometric Unity manuscript are available to view and download.
Topicctl – an easier way to manage Kafka topics
Today, we're releasing Topicctl , a tool for easy, declarative management of Kafka topics. Here's why. Apache Kafka is a core component of Segment’s infrastructure. Every event that hits the Segment API goes through multiple Kafka topics as it’s processed in our core data pipeline (check out this article for more technical details). At peak times, our largest cluster handles over 3 million messages per second. While Kafka is great at efficiently handling large request loads, configuring and managing clusters can be tedious. This is particularly true for topics, which are the core structures used to group and order messages in a Kafka cluster. The standard interfaces around creating and updating topics aren’t super user-friendly and require a thorough understanding of Kafka’s internals. If you’re not careful, it’s fairly easy to make accidental changes that degrade performance or, in the worst case, cause data loss. These issues aren’t a problem for Kafka experts dealing with a small number of fairly static topics. At Segment, however, we have hundreds of topics across our clusters, and they’re used by dozens of engineering teams. Moreover, the topics themselves are fairly dynamic. New ones are commonly added and existing ones are frequently adjusted to handle load changes, new product features, or production incidents. Previously, most of this complexity was handled by our SRE team. Engineers who wanted to create a new topic or update an existing one would file a ticket in Jira. An SRE team member would then read through the ticket, manually look at the actual cluster state, figure out the set of commands to run to make the desired changes, and then run these in production. From the requester’s perspective, the process was a black box. If there were any later problems, the SRE team would again have to get involved to debug the issues and update the cluster configuration. This system was tedious but generally worked. 
However, we recently decided to change the layout of the partitions in our largest topics to reduce networking costs. Dealing with this rollout in addition to the usual stream of requests for topic-related changes, each of which would have to be handled manually, would be too much for our small SRE team to deal with. We needed a better way to apply bigger changes ourselves while also making it easier for others outside our team to manage their topics. We decided to develop tooling and associated workflows that would make it easier and safer to manage topics. Our desired end-state had the following properties:

- All configuration lives in git.
- Topics are defined in a declarative, human-friendly format.
- Changes are applied via a guided, idempotent process.
- Most changes are self-service, even for people who aren't Kafka experts.
- It's easy to understand the current state of a cluster.

Many of these were shaped by our experiences making AWS changes with Terraform and Kubernetes changes with kubectl. We wanted an equivalent to these for Kafka! We developed a tool, topicctl, that addresses the above requirements for Kafka topic management. We recently open-sourced topicctl, and we're happy to have others use it and work with us to make it better. The project README has full details on how to configure and run the tool, so we won't repeat them here. We would, however, like to cover some highlights. As with kubectl, resources are configured in YAML, and changes are made via an apply process. Each apply run compares the state of a topic in the cluster to the desired state in the config. If the two are the same, nothing happens.
If there are any differences, topicctl shows them to the user for review, gets approval, and then applies them in the appropriate way. Currently, our topic configs include the following properties:

- Retention time and other config-level settings
- Replica placement strategy

The last property is the most complex and was a big motivation for the project. The tool supports static placements as well as strategies that balance leader racks and/or ensure all of the replicas for a partition are in the same rack. We're actively using these strategies at Segment to improve performance and reduce our networking costs. In addition to orchestrating topic changes, topicctl also makes it easier to understand the current state of topics, brokers, consumer groups, messages, and other entities in a cluster. Based on user feedback, and after evaluating gaps in existing Kafka tooling, we decided that a repl would be a useful interface to provide. We used the c-bata/go-prompt library and other components to create an easy, visually appealing way to explore inside a Kafka cluster. In addition to the repl, topicctl exposes a more traditional "get" interface for displaying information about cluster resources, as well as a "tail" command that provides some basic message tailing functionality. topicctl is implemented in pure Go. Most interaction with the cluster is through ZooKeeper, since this is often the easiest (or sometimes only) choice and is fully compatible with older Kafka versions. Some features, like topic tailing and lag evaluation, use higher-level broker APIs. In addition to the go-prompt library mentioned above, the tool makes heavy use of samuel/go-zookeeper (for interaction with ZooKeeper) and the segmentio/kafka-go library (for interaction with brokers). At Segment, we've placed the topicctl configuration for all of our topics in a single git repo.
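To make the declarative format concrete, a topic definition might look roughly like the sketch below. The field names here are illustrative assumptions based on the description above, not the authoritative schema; see the project README for the real format.

```yaml
# Hypothetical topicctl-style topic config (field names are illustrative).
meta:
  name: events-ingest
  cluster: core-pipeline
  environment: production
  region: us-west-2

spec:
  partitions: 64
  replicationFactor: 3
  retentionMinutes: 1440
  placement:
    # Keep each partition's replicas in the same rack to cut
    # cross-rack (and thus cross-AZ) network traffic.
    strategy: in-rack
```

An apply run would then diff a file like this against the live cluster and prompt for approval before changing anything.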
When engineers want to create new topics or update existing ones, they open up a pull request in this repo, get the change approved and merged, and then run topicctl apply via Docker on a bastion host in the target account. Unlike the old method, most changes are self-service, and it’s significantly harder to make mistakes that will cause production incidents. Bigger changes still require involvement from the SRE team, but this is less frequent than before. In addition, the team can use topicctl itself for many of these operations, which is more efficient than the old tooling. Through better tooling and workflows, we were able to reduce the burden on our SRE team and provide a better user experience around the management of Kafka topics. Feel free to check out topicctl and let us know if you have any feedback!
Buyers vs. Users
Let’s say you sell a SaaS product, with software developers as the target audience. They get in touch to talk about testing the product in their organization. Even if they are the ones talking to Sales initially and driving a Proof of Concept, are they also the ones buying it? Not necessarily. In some organizations, software developers can make buying decisions, either directly by using the company credit card or through their manager, who purchases based on a majority vote. But in other organizations, the person buying the product is not the same as the one using it later on. The buyer might be someone from the purchasing department or a C-level executive. In either case, have you tried talking to them in the same way you talk to developers about your product? I did initially, and it wasn’t fruitful. A CTO doesn’t care too much about all the great features we provide. First of all, a CTO wouldn’t directly use the product, but also, they have different metrics to determine success for themselves. Metrics to determine success? For instance, a developer might determine their success by the number of features delivered, the number of bugs fixed, or how many code reviews they performed. On the other hand, a CTO might be concerned with the efficiency of the entire developer organization or with saving costs. With these metrics in mind, can your product make an impact? What are the benefits that enable cost savings and an increase in efficiency? Now, how is that critical for sales conversations? When you talk to developers in, let’s say, an initial discovery call, you discuss their use cases and see if you might be able to help them. It’s a technical conversation. Then, you need to find out about the next steps and who ultimately makes the purchasing decision. In case somebody else makes the purchasing decision, find out as much as you can about the process. Will there be a meeting? Who else is attending? Can you talk to the decision-maker directly?
For these discussions, you need to have at hand the arguments and benefits that matter to them. How can you save costs? How can you enable the team to be more efficient? What is your story? Even if you can’t talk to the decision-maker directly, offer to meet with the developers to prep them for the decision-making meeting with your prepared arguments. It’s common in today’s organizations to have influencers and advocates who sell on your behalf. They like the company and the product, and now put in the effort to get it purchased. The better you can support them in these discussions, the higher the chance of closing a deal. Now, what are your benefits? Sit down and come up with the benefits that matter to the users. Then, do the same again for the buyer. What do they care about, and how can you make their lives easier?
Updates for Supabase Functions
The question on everyone's mind - are we launching Supabase Functions? Well, it's complicated. Today we're announcing part of Functions - Supabase Hooks - in Alpha, for all new projects. We're also releasing support for Postgres Functions and Triggers in our Dashboard, and some timelines for the rest of Supabase Functions. Let's cover the features we're launching today before the item that everyone is waiting for. Postgres has built-in support for SQL functions (not to be confused with Supabase Functions!). Today we're making it even easier for developers to build PostgreSQL Functions by releasing a native Functions editor. Soon we'll release some handy templates! You can call PostgreSQL Functions with supabase-js using your project API [Docs]. Triggers are another amazing feature of Postgres, which allow you to execute any SQL code after inserting, updating, or deleting data. While triggers are a staple of Database Administrators, they can be a bit complex and hard to use. We plan to change that with a simple interface for building and managing PostgreSQL triggers. They say building a startup is like jumping off a cliff and assembling the plane on the way down. At Supabase it's more like assembling a 747 since, although we're still in Beta, thousands of companies depend on us to power their apps and websites. For the past few months we've been designing Supabase Functions based on our customer feedback. A recurring request from our customers is the ability to trigger their existing Functions. This is especially true for our Enterprise customers, but also for Jamstack developers who develop API Functions directly within their stack (like Next.js API routes, or Redwood Serverless Functions). To meet these goals, we're releasing Supabase Functions in stages. (Note: Database Webhooks were previously called "Function Hooks.") Today we're releasing Function Hooks in ALPHA.
The ALPHA tag means that it is NOT stable, but it's available for testing and feedback. The API will change, so do not use it for anything critical. You have been warned. Hooks? Triggers? Firestore has the concept of Function Triggers, which are very cool. Supabase Hooks are the same concept, just with a different name. Postgres already has the concept of Triggers, and we thought this would be less confusing. Database Webhooks allow you to "listen" to any change in your tables to trigger an asynchronous Function. You can hook into a few different events: INSERT, UPDATE, and DELETE. All events are fired after a database row is changed. Keen eyes will be able to spot the similarity to Postgres triggers, and that's because Database Webhooks are just a convenience wrapper around triggers. Supabase will support several different targets. If the target is a Serverless function or an HTTP POST request, the payload is automatically generated from the underlying table data. The format matches Supabase Realtime, except in this case you don't need a client to "listen" to the changes. This provides yet another mechanism for responding to database changes. As with most of the Supabase platform, we leverage PostgreSQL's native functionality to implement Database Webhooks (previously called "Function Hooks"). To build hooks, we've released a new PostgreSQL extension, pg_net, an asynchronous networking extension with an emphasis on scalability/throughput. In its initial (unstable) release, the extension is capable of >300 requests per second and is the networking layer underpinning Database Webhooks. For a complete view of its capabilities, check out the docs. pg_net allows you to make asynchronous HTTP requests directly within your SQL queries. After making a request, the extension will return an ID. You can use this ID to collect a response.
You can cast the response to JSON within PostgreSQL. To build asynchronous behavior, we use a PostgreSQL background worker with a queue. This, coupled with the libcurl multi interface, enables us to do multiple simultaneous requests in the same background worker process. Shout out to Paul Ramsey, who gave us the implementation idea in pgsql-http. While we originally hoped to add background workers to his extension, the implementation became too cumbersome and we decided to start with a clean slate. The advantage of being async can be seen by making some requests with both extensions: the sync version waits until each request is completed to return the result, taking around 3.5 seconds for 10 requests, while the async version returns almost immediately, in 1.5 milliseconds. This is really important for Supabase hooks, which run requests for every event fired from a SQL trigger - potentially thousands of requests per second. This is only the beginning! First we'll thoroughly test it and make a stable release, then we expect to expand its capabilities. Database Webhooks are enabled today on all new projects. Find them under Database > Alpha Preview > Database Webhooks.
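As a sketch of what a webhook target receives: the post says the payload mirrors the Supabase Realtime change format, so a handler might parse an event like the one below. The field names (`type`, `table`, `schema`, `record`, `old_record`) are assumptions for illustration, not a documented contract.

```python
import json

# Hypothetical payload in a Realtime-style shape; the field names here
# are assumptions for illustration, not an official schema.
raw = """
{
  "type": "INSERT",
  "table": "profiles",
  "schema": "public",
  "record": {"id": 1, "username": "jane"},
  "old_record": null
}
"""

def describe_change(event):
    """Summarize a webhook event as '<TYPE> on <schema>.<table>'."""
    return f'{event["type"]} on {event["schema"]}.{event["table"]}'

payload = json.loads(raw)
print(describe_change(payload))  # → INSERT on public.profiles
```

A real handler would route on the event type and use `record` (and, for updates and deletes, `old_record`) to decide what to do.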
Python Ternary Operator
Python ternary operators, also called conditional expressions, are operators that evaluate something based on a binary condition. Ternary operators provide a shorthand way to write conditional statements, which makes the code more concise. In this tutorial, let us look at what the ternary operator is in Python and how we can use it in our code with some examples.

Syntax of Ternary Operator

The ternary operator is available from Python version 2.5 onwards, and the syntax is:

    value_if_true if condition else value_if_false

Note: The conditional is an expression, not a statement. It means that you can't use assignment statements or pass other statements within a conditional expression.

Introduction to Python Ternary Operators

Let us take a simple example to check if a student's result is pass or fail based on the marks obtained.

Using a traditional approach

We are familiar with the "if-else" condition, so let us first write a program that prompts for a student's marks and prints either pass or fail based on the condition specified.

    marks = input('Enter the marks: ')
    if int(marks) >= 35:
        print("The result is Pass")
    else:
        print("The result is Fail")

Output

    Enter the marks: 55
    The result is Pass

Now, instead of using a typical if-else condition, let us try using the ternary operator.

Example of Ternary Operator in Python

The ternary operator evaluates the condition first. If the result is true, it returns value_if_true. Otherwise, it returns value_if_false. The ternary operator is equivalent to the if-else condition. If you are from a programming background like C#, Java, etc., the if-else and ternary forms look like below:

    if condition:
        value_if_true
    else:
        value_if_false

    condition ? value_if_true : value_if_false

However, in Python, the syntax of the ternary operator is slightly different. The following example demonstrates how we can use the ternary operator in Python.
    marks = input('Enter the marks: ')
    print("The result is Pass" if int(marks) >= 35 else "The result is Fail")

Output

    Enter the marks: 34
    The result is Fail
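Beyond print statements, the ternary is most useful when assigning a value, and it works inside comprehensions as well:

```python
marks = 55

# Assign a value directly from the condition.
result = "Pass" if marks >= 35 else "Fail"
print(result)  # → Pass

# The same expression inside a list comprehension.
all_marks = [20, 35, 60, 90]
results = ["Pass" if m >= 35 else "Fail" for m in all_marks]
print(results)  # → ['Fail', 'Pass', 'Pass', 'Pass']
```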
Increased Errors in Auth0
Increased errors in Auth0
Current status: Resolved | Last updated at July 1, 2021, 21:11 UTC
Affected environments: US-1 Preview, US-1 Production
We have posted the final update to our detailed root-cause analysis for our service disruption on April 20, 2021. This update includes completion of the remaining six action items: https://cdn.auth0.com/blog/Detailed_Root_Cause_Analysis_(RCA)_4-2021.pdf
Incident updates were posted throughout April 20, 2021, between 15:43 UTC and 23:53 UTC.
Re-Introducing Status Desktop – Beta v0.1.0. Available on Windows, Mac, Linux
Status Desktop returns as beta v0.1.0 to provide private, secure communication on Mac, Windows, and Linux. Some may remember the moment in November 2018 at DevconIV when Status officially pulled the plug on the core contributor Slack and migrated entirely over to the Status Desktop alpha. It was a massive moment for Status and the mission to provide private, secure communication no matter where you are – on the go with your smartphone or while at work at your desk. The conversation in Status Desktop was flowing and the product was improving each day - dogfooding at its finest. However, as many know, building infrastructure and privacy-preserving tools from the ground up that adhere to the strict values and principles of the Status community is challenging, to say the least. With that, the team decided to prioritize the Status mobile app and put development of the desktop client on pause. Fast forward roughly one year: v1 of the Status mobile app is live in the App Store and Play Store, and development of desktop is back underway, driven by a dedicated team. Today, Status officially re-introduces the Desktop client as beta v0.1.0 – marking a huge milestone in bringing decentralized messaging no matter where you are. The team, along with community contributors, has been steadily working on the client for the past few months. Developer builds have only been available via the Github repository for the team along with those willing to try out experimental software. With key features implemented, bringing the desktop messenger close to feature parity with the mobile app, it is officially ready for wider testing and can be downloaded for Mac, Windows, and Linux here. While the Status mobile app provides a holistic experience for communication and access to Ethereum with an integrated private messenger, Ethereum wallet, and Web3 DApp browser, the desktop app focuses initially on the messenger.
It includes all the key features of the mobile application, including private 1:1 chats, private group chats, community public channels, images in 1:1 and group chats, emoji reactions and more. Status Desktop is truly the first desktop messenger built in line with Status Principles. It leverages Waku for peer-to-peer messaging, just like the mobile app. Waku, the fork of the Whisper protocol, aims to deliver the removal of centralized rent-seeking intermediaries, decentralization of the network, removal of single points of failure, and censorship resistance. Desktop includes access to the Status Sticker Market as well as the ability to register and display stateofus.eth ENS usernames, both of which require SNT. For this reason, the wallet is available but is hidden from the UI unless toggled on under advanced settings (Profile >> Advanced >> Wallet Tab). Status Desktop has not undergone a formal security audit, so wallet features are available at the risk of the user. The Web3 DApp browser is currently removed from the product entirely while the team builds some final features and can then conduct a security audit. Access to DApps is a crucial part of the Status user experience, but only when strong privacy and security guarantees can be made. As always, Status will not cut corners and jeopardize the security of the community. Both the wallet and DApp browser are under active development, and a security audit is in the near future. When the browser is enabled, Status will provide a window into the world of Web3 and a communication layer to Ethereum. Current Status users can import their existing accounts and then easily sync their mobile and desktop apps for a seamless experience across devices. Simply import an account with a seed phrase, then head to the Profile tab, Device settings, and pair devices. Install Status Desktop and test it out for Mac, Windows, or Linux. Status Desktop is un-audited, beta software, and builds are available for testing.
For this reason, upon installation, you will need to drag Status into the Applications folder on your desktop and then manually open the app: right click >> Open.
System-Level Packaging Tradeoffs
Leading-edge applications such as artificial intelligence, machine learning, automotive, and 5G, all require high bandwidth, higher performance, lower power and lower latency. They also need to do this for the same or less money. The solution may be disaggregating the SoC onto multiple die in a package, bringing memory closer to processing elements and delivering faster turnaround time. But the tradeoffs for making this happen are becoming increasingly complex, regardless of the advanced packaging approaches. In PCB-based systems, there are lots of devices on one board. Over time, as scaling allows for tighter integration and increased density, it is possible to bring everything together onto a single die. But this also is swelling the size of some designs, which actually can exceed reticle size even at the most advanced process nodes. But choosing what to keep on the same die, what components to put on other die, and how to package them all together presents a lot of choices. “2.5D and 3D is making this very complex, and people cannot always understand what’s going on,” said Rita Horner, senior staff, product manager at Synopsys. “How do you actually go about doing that? How do you do the planning? How do you do the exploration phase before knowing what to put in what, and in what configurations? Those are the challenges a lot of people are facing. And then, which tool do you use to optimally do your planning, do your design, do your implementation validation? There are so many different point tools in the market it’s unbelievable. It’s alphabet soup, and people are very confused. That’s the key. How do you actually make this implementation happen?” Fig. 1: Some advanced packaging options. Source: Synopsys Historically, many chipmakers have taken an SoC approach that also included some analog content and memory. This was likely a single, integrated SoC, where all of the IP was on a single die. 
“Some people still do that because for their business model, that makes the most sense,” said Michael White, product marketing director at Mentor, a Siemens Business. “But today, there are certainly others putting the CPU on one die, having that connected [TSMC] InFO-style, or with a silicon interposer connected up to an HBM memory. There may be other peripheral IP on that same silicon interposer to perform certain functions. Different people use different advanced packaging approaches, depending on where they are going. Are they going into a data center, where they can charge a real premium for that integrated package they’ve created? Or are they going in mobile, where they’re super cost-sensitive and power-sensitive? They might tend to drift more towards the InFO-style package approach.” Alternatively, if the die configuration contains relatively few die and is relatively symmetric, TSMC InFO may be utilized, connecting everything up through the InFO without a silicon interposer, with everything sitting on an organic substrate. There are many different configurations, he said. “It’s really driven by how many die do you have. Is the layout of the die or chiplets that you’re trying to connect up relatively few and a rather symmetric configuration? InFO-style technology as well as other OSAT approaches are used versus a silicon interposer, if possible, because the silicon interposer is another chunk of silicon that you have to create/manufacture, so there’s more cost. Your bill of materials has more cost with that, so you use it if your performance absolutely drives the need for it, or the number of chiplets that you have drives you in that direction. Another approach we’re seeing is folks trying to put some active circuitry in that silicon interposer, such as silicon photonics devices in the silicon interposer.
Big picture, packaging technology being used is all over the map, and it’s really a function of the particular application, the market, and the complexity of the number of chiplets or dies, and so on.”

System-level problems

All of this speaks to the ongoing quest to continue to make gains in the system, which is manifesting in any number of architecture choices and compute engine decisions, in addition to packaging options. Paramount in the decision-making process is cost. “Chiplets are very interesting there. That seems to be the preferred solution now for dealing with the cost,” said Kristof Beets, senior director of technical product management at Imagination Technologies. “We’re seeing a lot of GPUs that have to become scalable by just copying multiple and connecting them together. Ramping up nodes is great in one way, but the cost isn’t so great. The question is really how to effectively create one 7nm or 5nm chip, and then just hook them together with whatever technique is chosen to ideally double the performance. Engineering groups are looking to do this to create multiple performances out of just a single chip investment.” Here, the first question to pose is whether you know exactly what application you will be creating your chip for. “This market is fast-moving, so you’re never quite sure that requirements which were set for the application when you started designing the chip will be the same at the end of the design and verification cycle,” said Aleksandar Mijatovic, design engineer at Vtool. “That is the first concern, which has to give you some push towards including more features than you actually need at the time.
Many companies will try to look forward and get ready for a change in standards, protocols, or speeds, in order not to lose the year of market that will arrive when other components in that application get upgraded and they have just put on the market something based on a one-year-old standard.” Other issues need attention, as well. “Full design and full verification, plus mask making and manufacturing, is quite an expensive process, so something you may think of as too much logic might be there just because it turned out to be cheaper than producing two or three flavors of one chip, each with full verification, full masks, and the whole packaging line,” Mijatovic said. “Sometimes it’s just cheaper not to use some features than to manufacture several similar chips. That was the case when AMD came out with dual-core processors that were quad cores with two cores turned off, because the line was already set up and nobody wanted to pay for the expense of shrinking. A lot comes down to architecture, market research, and bean counting.” When it comes to chiplets, from the perspective of the verification domain, the biggest challenge for pre-silicon verification is that there might be more complexity (i.e., a bigger system), at least potentially, and more interfaces and package-level fabric to verify, said Sergio Marchese, technical marketing manager at OneSpin. “On the other hand, if you have a bunch of chiplets fabricated on different technology nodes, those different nodes should not affect pre-silicon verification. One thing that is not clear is this: if you figure out that there is something wrong, not with a specific chiplet, but with their integration, what’s the cost of a ‘respin?’”

One-off solutions
Another aspect of advanced packaging approaches today is that many are unique to a single design, and while one company may be building the most advanced chip, they don’t get enough out of scaling in terms of power and performance.
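The cost argument for chiplets can be made concrete with a standard die-yield calculation: yield falls as die area grows, so four small chiplets can cost less in good silicon than one large monolithic die. The sketch below uses the well-known negative-binomial yield model; the defect density and wafer cost numbers are illustrative assumptions, not data from any foundry, and the comparison deliberately ignores packaging and assembly cost.

```python
# Negative-binomial die-yield model: Y = (1 + A * D0 / alpha) ** (-alpha),
# where A is die area, D0 defect density, alpha a clustering parameter.
# All numbers below are illustrative assumptions.

def die_yield(area_cm2: float, defect_density: float, alpha: float = 3.0) -> float:
    """Fraction of dies of the given area that are defect-free."""
    return (1.0 + area_cm2 * defect_density / alpha) ** (-alpha)

def cost_per_good_die(area_cm2: float, defect_density: float,
                      cost_per_cm2: float) -> float:
    """Silicon cost of one working die, amortizing the bad ones."""
    return area_cm2 * cost_per_cm2 / die_yield(area_cm2, defect_density)

D0, WAFER_COST = 0.1, 20.0  # assumed: 0.1 defects/cm^2, $20/cm^2

mono = cost_per_good_die(8.0, D0, WAFER_COST)          # one 8 cm^2 die
chiplets = 4 * cost_per_good_die(2.0, D0, WAFER_COST)  # four 2 cm^2 chiplets

print(f"monolithic: ${mono:.2f}, four chiplets: ${chiplets:.2f}")
```

The chiplet total comes out lower because the small die yields much better, which is exactly the trade the quotes above describe; the interposer or fan-out package needed to reconnect the chiplets then eats into that saving, which is why the bill-of-materials question decides the approach.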
They may get the density, but they don’t get the benefits that they used to out of scaling. They turn to packaging to get extra density with an architectural change. While this is possible for leading-edge chipmakers, what will it take for mainstream chipmakers to access this kind of approach for highly complex and integrated chips, chiplets, and SoCs? This requires a very thorough understanding of the dynamics of how these systems are getting connected together. “Typically, the architect, chip designers, package designers, and PCB designers all operate independently of each other, sometimes sequentially, in their own silos,” said Synopsys’ Horner. “But die-level complexities are increasing so much that they can no longer operate independently of each other. If they really want to make multi-die integration go mainstream, be affordable, more reliable, with faster turnaround time to get to market, there needs to be a more collaborative environment where all these different disciplines and individuals actually can work together from the early stages of the design to implementation, to validation, and even to manufacturing. If there is a new next-generation iteration happening, it would be nice to have a platform to go back to and learn from, to be able to further optimize for the next generation without going back to paper and pencil, which a lot of people are using to do their planning and organizing before they start digging in.” However, there is no platform today that allows all these different disciplines to collaborate with each other and to learn from each other. “This collaborative environment would allow people to even go back to the drawing board when they realize they must disaggregate the die because it’s getting too large,” Horner said. “And because the die has to be disaggregated, additional I/O must be added. But what type is best? If the I/O is put on one side of the chip versus the other, what’s the impact on the substrate layer design?
What that means then translates to the package and board level: where the balls are placed in the package, versus the C4 bumps on the substrate, or the microbumps on a die.” She suggested that the ideal situation is to have a common unified platform that brings all the information into the simulation environment. “You could bring in the information from the DDR process, or from the HBM, or the CMOS technology where the CPUs may be sitting, and then have enough information to extract the parasitics. Then you can use simulation to make sure the wiring that you’re doing, the spacing and width of the traces that you’re using for your interconnect, or the shielding you’re doing, are going to be able to meet the performance requirement. It is no different from past approaches, but the complexity is getting high. In the past, when you did a multi-die package, there were very few traces going between the parts. Now, with HBM, you have thousands of traces just for one HBM connection. It’s very complex. That’s why we are seeing silicon interposers enabling the interconnect, because they allow the fine granularity of widths and spaces that is needed for this level of density.”

Looking past Moore
While there may be more attention being paid to advanced “More Than Moore” approaches, Mentor’s White said there are plenty of engineering teams still following Moore’s Law. They are building an SoC with GPUs and memory on a single die. “We still see that, but we definitely see folks who also are looking at building that assembly with multiple die,” White said. “Today, that might include a CPU sitting next to a GPU connected up to HBM memory, and then some series of other supporting IP chiplets that may be high-speed interfaces, or RF, for Bluetooth or something else.
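A first-order sanity check of the kind of simulation Horner describes is an RC delay estimate for a single interposer trace: given a chosen width, thickness, and length, does the trace have any chance of meeting the timing budget? The sketch below uses textbook formulas (resistance from copper resistivity, a lumped Elmore-style delay); every dimension and the capacitance per unit length are illustrative assumptions, and this is no substitute for full parasitic extraction.

```python
# Crude RC delay estimate for one interposer trace.
# All geometry and capacitance numbers are illustrative assumptions.

RHO_CU = 1.7e-8  # copper resistivity, ohm*m

def trace_rc_delay(length_m: float, width_m: float,
                   thickness_m: float, cap_per_m: float) -> float:
    """Elmore-style delay (~0.69*R*C) of a lumped trace, in seconds."""
    r = RHO_CU * length_m / (width_m * thickness_m)  # series resistance, ohms
    c = cap_per_m * length_m                         # total capacitance, farads
    return 0.69 * r * c

# Assumed: a 3 mm route, 1 um wide, 1 um thick copper, ~170 pF/m.
delay = trace_rc_delay(3e-3, 1e-6, 1e-6, 170e-12)
print(f"estimated delay: {delay * 1e12:.1f} ps")
```

Even this toy model shows why width and spacing matter: halving the width doubles the resistance, and with thousands of such traces per HBM connection, only an interposer-class process offers the fine geometries needed.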
We certainly see lots of companies talking about that approach, as well, of having multiple chiplets, including a GPU, and all of this assembly — and then using that as an easier way to substitute in or out some of those supporting chiplets for different markets. In one particular market, you’re going to have RF and whatever. Another market may demand a slightly different set of supporting chiplets on that SoC, maybe more memory, maybe less memory, for whatever that marketplace needs. Or maybe I’m trying to intercept different price points. For the top of the line, I’ve got tons of HBM and a larger processor. Then, for some lower-level technology node, I’ve got a much more modest memory and a smaller GPU because the market pricing would not support that first one.” Other must-have requirements for package design include being able to understand all the different ways you can attach chips to package designs, whether it’s wirebond, flip-chip, stacking, or embedding. “You have to have a tool that understands the intricacies of that cross section of the design,” said John Park, product management director for IC packaging and cross-platform solutions at Cadence. Park noted that what is often overlooked is the connectivity use model. “This is important because the chip designer may take the RTL and netlist it to Verilog, and that’s their connectivity. Board people use schematics for their connectivity. Then, packaging people sit somewhere in the middle, and for a lot of the connectivity, they have the flexibility to assign the I/Os based on better routing at the board level. They need the ability to drive the design with a partial schematic, and the flexibility to create their own on-the-fly connectivity to the I/Os, which are somewhat flexible. They need the ability to work in spreadsheets. It’s not a single source for the connectivity, but it can be, and some people would like it that way.
But it’s more important to have a really flexible connectivity model that allows you to drive the design from schematics or spreadsheets, and to build connectivity on the fly,” he said. Park added that it’s important to have tight integration with mask-level sign-off tools to improve routing, with specific knowledge of metal fill and RDL routing, to create higher-yielding designs. “The most important aspect is that it be a traditional BGA tool, but with the ability to integrate with mask-level physical verification tools for DRC and LVS, so I can take the layout, point to a rule deck in my verification tool, and any errors are fed back into the layout so I can correct them. That’s an important flow for people who are extending beyond BGA into some of these fan-out wafer-level packages,” Park concluded.

Fig. 2: Basic chiplet concept. Source: Cadence
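The spreadsheet-driven connectivity model Park describes can be sketched as a small consistency check: read a table mapping nets to die bumps and package balls, and flag any ball assigned to more than one net. The column names, net names, and ball names below are made up for illustration; a real flow would also cross-check against the Verilog netlist and the board schematic.

```python
import csv
import io
from collections import defaultdict

# Hypothetical connectivity spreadsheet: one row per net-to-pin assignment.
SHEET = """net,die_bump,package_ball
HBM_DQ0,A1,BGA_C3
HBM_DQ1,A2,BGA_C4
CLK,B1,BGA_C3
"""

def find_conflicts(sheet_text: str) -> dict:
    """Return package balls that have been assigned to more than one net."""
    balls = defaultdict(set)
    for row in csv.DictReader(io.StringIO(sheet_text)):
        balls[row["package_ball"]].add(row["net"])
    return {ball: nets for ball, nets in balls.items() if len(nets) > 1}

print(find_conflicts(SHEET))  # BGA_C3 is double-assigned above
```

Because the package designer can reassign I/Os for better board routing, a check like this is the minimum needed to keep spreadsheet, schematic, and netlist views of the same connectivity from silently diverging.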
Leak reveals $2T of possibly corrupt US financial activity
Thousands of documents detailing $2 trillion (£1.55tn) of potentially corrupt transactions that were washed through the US financial system have been leaked to an international group of investigative journalists. The leak focuses on more than 2,000 suspicious activity reports (SARs) filed with the US government’s Financial Crimes Enforcement Network (FinCEN). Banks and other financial institutions file SARs when they believe a client is using their services for potential criminal activity. However, the filing of an SAR does not require the bank to cease doing business with the client in question. The documents were provided to BuzzFeed News, which shared them with the International Consortium of Investigative Journalists. The documents are said to suggest major banks provided financial services to high-risk individuals from around the world, in some cases even after they had been placed under sanctions by the US government. According to the ICIJ, the documents relate to more than $2tn of transactions dated between 1999 and 2017. One of those named in the SARs is Paul Manafort, a political strategist who led Donald Trump’s 2016 presidential election campaign for several months. He stepped down from the role after his consultancy work for former Ukrainian president Viktor Yanukovych was exposed, and he was later convicted of fraud and tax evasion. According to the ICIJ, banks began flagging activity linked to Manafort as suspicious beginning in 2012. In 2017 JP Morgan Chase filed a report on wire transfers worth over $300m involving shell companies in Cyprus that had done business with Manafort. The ICIJ said Manafort’s lawyer did not respond to an invitation to comment. A separate report details over $1bn in wire transfers by JP Morgan Chase that the bank later came to suspect were linked to Semion Mogilevich, an alleged Russian organised crime boss who is named on the FBI’s top 10 most wanted list.
A JP Morgan Chase spokesperson told the BBC: “We follow all laws and regulations in support of the government’s work to combat financial crimes. We devote thousands of people and hundreds of millions of dollars to this important work.” According to BBC Panorama, the British bank HSBC allowed a group of criminals to transfer millions of dollars from a Ponzi scheme through its accounts, even after it had identified their fraud. HSBC said in a statement: “Starting in 2012, HSBC embarked on a multi-year journey to overhaul its ability to combat financial crime across more than 60 jurisdictions.” It added: “HSBC is a much safer institution than it was in 2012.” In a statement released earlier this month FinCEN condemned the disclosure of the leaked documents and said it had referred the matter to the US Department of Justice. “The Financial Crimes Enforcement Network is aware that various media outlets intend to publish a series of articles based on unlawfully disclosed suspicious activity reports (SARs), as well as other sensitive government documents, from several years ago,” it stated. “As FinCEN has stated previously, the unauthorised disclosure of SARs is a crime that can impact the national security of the United States, compromise law enforcement investigations, and threaten the safety and security of the institutions and individuals who file such reports.”
The 1968 Sunday Times Golden Globe Race
Robin Knox-Johnston finishing his circumnavigation of the world in Suhaili as the winner of the Golden Globe Race

The Sunday Times Golden Globe Race was a non-stop, single-handed, round-the-world yacht race, held in 1968–1969, and was the first round-the-world yacht race. The race was controversial due to the failure of most competitors to finish the race and because of the apparent suicide of one entrant; however, it ultimately led to the founding of the BOC Challenge and Vendée Globe round-the-world races, both of which continue to be successful and popular. The race was sponsored by the British Sunday Times newspaper and was designed to capitalise on a number of individual round-the-world voyages which were already being planned by various sailors; for this reason, there were no qualification requirements, and competitors were offered the opportunity to join and permitted to start at any time between 1 June and 31 October 1968. The Golden Globe trophy was offered to the first person to complete an unassisted, non-stop single-handed circumnavigation of the world via the great capes, and a separate £5,000 prize was offered for the fastest single-handed circumnavigation. Nine sailors started the race; four retired before leaving the Atlantic Ocean. Of the five remaining, Chay Blyth, who had set off with absolutely no sailing experience, sailed past the Cape of Good Hope before retiring; Nigel Tetley sank with 1,100 nautical miles (2,000 km) to go while leading; Donald Crowhurst, who, in desperation, attempted to fake a round-the-world voyage to avoid financial ruin, began to show signs of mental illness, and then committed suicide; and Bernard Moitessier, who rejected the philosophy behind a commercialised competition, abandoned the race while in a strong position to win and kept sailing non-stop until he reached Tahiti after circling the globe one and a half times.
Robin Knox-Johnston was the only entrant to complete the race, becoming the first man to sail single-handed and non-stop around the world. He was awarded both prizes, and later donated the £5,000 to a fund supporting Crowhurst's family. Long-distance single-handed sailing has its beginnings in the nineteenth century, when a number of sailors made notable single-handed crossings of the Atlantic. The first single-handed circumnavigation of the world was made by Joshua Slocum, between 1895 and 1898, [1] and many sailors have since followed in his wake, completing leisurely circumnavigations with numerous stopovers. However, the first person to tackle a single-handed circumnavigation as a speed challenge was Francis Chichester, who, in 1960, had won the inaugural Observer Single-handed Trans-Atlantic Race (OSTAR). [2] In 1966, Chichester set out to sail around the world by the clipper route, starting and finishing in England with a stop in Sydney, in an attempt to beat the speed records of the clipper ships in a small boat. His voyage was a great success, as he set an impressive round-the-world time of nine months and one day – with 226 days of sailing time – and, soon after his return to England on 28 May 1967, was knighted by Queen Elizabeth II. [3] Even before his return, however, a number of other sailors had turned their attention to the next logical challenge – a non-stop single-handed circumnavigation of the world. In March 1967, a 28-year-old British merchant marine officer, Robin Knox-Johnston, realised that a non-stop solo circumnavigation was "about all there's left to do now". Knox-Johnston had a 32 foot (9.8 m) wooden ketch, Suhaili, which he and some friends had built in India to the William Atkin Eric design; two of the friends had then sailed the boat to South Africa, and in 1966 Knox-Johnston had single-handedly sailed her the remaining 10,000 nautical miles (11,500 mi; 18,500 km) to London.
[4] Knox-Johnston was determined that the first person to make a single-handed non-stop circumnavigation should be British, and he decided that he would attempt to achieve this feat. To fund his preparations he went looking for sponsorship from Chichester's sponsor, the British Sunday Times. The Sunday Times was by this time interested in being associated with a successful non-stop voyage but decided that, of all the people rumoured to be preparing for a voyage, Knox-Johnston and his small wooden ketch were the least likely to succeed. Knox-Johnston finally arranged sponsorship from the Sunday Mirror. [5] [6] Several other sailors were interested. Bill King, a former Royal Navy submarine commander, built a 42 foot (12.8 m) junk-rigged schooner, Galway Blazer II, designed for heavy conditions. He was able to secure sponsorship from the Express newspapers. John Ridgway and Chay Blyth, a British Army captain and sergeant, had rowed a 20 foot (6.1 m) boat across the Atlantic Ocean in 1966. They independently decided to attempt the non-stop sail, but despite their rowing achievement were hampered by a lack of sailing experience. They both made arrangements to get boats, but ended up with entirely unsuitable vessels, 30 foot (9.1 m) boats designed for cruising protected waters and too lightly built for Southern Ocean conditions. Ridgway managed to secure sponsorship from The People newspaper. [7] One of the most serious sailors considering a non-stop circumnavigation in late 1967 was the French sailor and author Bernard Moitessier. Moitessier had a custom-built 39 foot (11.9 m) steel ketch, Joshua, named after Slocum, in which he and his wife Françoise had sailed from France to Tahiti. They had then sailed her home again by way of Cape Horn, simply because they wanted to go home quickly to see their children. He had already achieved some recognition based on two successful books which he had written on his sailing experiences.
However, he was disenchanted with the material aspect of his fame – he believed that by writing his books for quick commercial success he had sold out what was for him an almost spiritual experience. He hit upon the idea of a non-stop circumnavigation as a new challenge, which would be the basis for a new and better book. [8] [9] By January 1968, word of all these competing plans was spreading. The Sunday Times, which had profited to an unexpected extent from its sponsorship of Chichester, wanted to get involved with the first non-stop circumnavigation, but had the problem of selecting the sailor most likely to succeed. King and Ridgway, two likely candidates, already had sponsorship, and there were several other strong candidates preparing. "Tahiti" Bill Howell, an Australian cruising sailor, had made a good performance in the 1964 OSTAR, Moitessier was also considered a strong contender, and there may have been other potential circumnavigators already making preparations.

The route of the Golden Globe Race.

The Sunday Times did not want to sponsor someone for the first non-stop solo circumnavigation only to have them beaten by another sailor, so the paper hit upon the idea of a sponsored race, which would cover all the sailors setting off that year. To circumvent the possibility of a non-entrant completing his voyage first and scooping the story, they made entry automatic: anyone sailing single-handed around the world that year would be considered in the race. This still left them with a dilemma in terms of the prize. A race for the fastest time around the world was a logical subject for a prize, but there would obviously be considerable interest in the first person to complete a non-stop circumnavigation, and there was no possibility of persuading the possible candidates to wait for a combined start.
The Sunday Times therefore decided to award two prizes: the Golden Globe trophy for the first person to sail single-handed, non-stop around the world; and a £5,000 prize (equivalent to £92,000 in 2021) for the fastest time. [10] This automatic entry provision had the drawback that the race organisers could not vet entrants for their ability to take on this challenge safely. This was in contrast to the OSTAR, for example, which in the same year required entrants to complete a solo 500-nautical mile (930 km) qualifying passage. [11] The one concession to safety was the requirement that all competitors must start between 1 June and 31 October, in order to pass through the Southern Ocean in summer. [12] To make the speed record meaningful, competitors had to start from the British Isles. However Moitessier, the most likely person to make a successful circumnavigation, was preparing to leave from Toulon, in France. When the Sunday Times went to invite him to join the race, he was horrified, seeing the commercialisation of his voyage as a violation of the spiritual ideal which had inspired it. A few days later, Moitessier relented, thinking that he would join the race and that if he won, he would take the prizes and leave again without a word of thanks. In typical style, he refused the offer of a free radio to make progress reports, saying that this intrusion of the outside world would taint his voyage; he did, however, take a camera, agreeing to drop off packages of film if he got the chance. [13] The race was announced on 17 March 1968, by which time King, Ridgway, Howell (who later dropped out), Knox-Johnston, and Moitessier were registered as competitors. Chichester, despite expressing strong misgivings about the preparedness of some of the interested parties, was to chair the panel of judges. [10] Four days later, British electronics engineer Donald Crowhurst announced his intention to take part.
Crowhurst was the manufacturer of a modestly successful radio navigation aid for sailors, who impressed many people with his apparent knowledge of sailing. With his electronics business failing, he saw a successful adventure, and the attendant publicity, as the solution to his financial troubles – essentially the mirror opposite of Moitessier, who saw publicity and financial rewards as inimical to his adventure. [14] Crowhurst planned to sail in a trimaran. These boats were starting to gain a reputation, still very much unproven, for speed, along with a darker reputation for unseaworthiness; they were known to be very stable under normal conditions, but extremely difficult to right if knocked over, for example by a rogue wave. Crowhurst planned to tackle the deficiencies of the trimaran with a revolutionary self-righting system, based on an automatically inflated air bag at the masthead. He would prove the system on his voyage, then go into business manufacturing it, thus making trimarans into safe boats for cruisers. [15] By June, Crowhurst had secured some financial backing, essentially by mortgaging the boat, and later his family home. Crowhurst's boat, however, had not yet been built; despite the lateness of his entry, he pressed ahead with the idea of a custom boat, which started construction in late June. Crowhurst's belief was that a trimaran would give him a good chance of the prize for the fastest circumnavigation, and with the help of a wildly optimistic table of probable performances, he even predicted that he would be first to finish – despite a planned departure on 1 October. [16]

The start (1 June to 28 July)
Given the design of the race, there was no organised start; the competitors set off whenever they were ready, over a period of several months. On 1 June 1968, the first allowable day, John Ridgway sailed from Inishmore, Ireland, in his weekend cruiser English Rose IV.
Just a week later, on 8 June, Chay Blyth followed suit – despite having absolutely no sailing experience. On the day he set sail, he had friends rig his boat, Dytiscus III, for him and then sail in front of him in another boat to show him the correct manoeuvres. [17] Knox-Johnston got underway from Falmouth soon after, on 14 June. He was undisturbed by the fact that it was a Friday, contrary to the common sailors' superstition that it is bad luck to begin a voyage on a Friday. Suhaili, crammed with tinned food, was low in the water and sluggish, but the much more seaworthy boat soon started gaining on Ridgway and Blyth. [18] It soon became clear to Ridgway that his boat was not up to a serious voyage, and he was also becoming affected by loneliness. On 17 June, at Madeira, he made an arranged rendezvous with a friend to drop off his photos and logs, and received some mail in exchange. While reading one of the newspapers that he had just received, he discovered that the rules against assistance prohibited receiving mail – including the newspaper in which he was reading this – and so he was technically disqualified. While he dismissed this as overly petty, he continued the voyage in bad spirits. The boat continued to deteriorate, and he finally decided that it would not be able to handle the heavy conditions of the Southern Ocean. On 21 July he put into Recife, Brazil, and retired from the race. [19] Even with the race underway, other competitors continued to declare their intention to join. On 30 June, Royal Navy officer Nigel Tetley announced that he would race in the trimaran he and his wife lived aboard. He obtained sponsorship from Music for Pleasure, a British budget record label, and started preparing his boat, Victress, in Plymouth, where Moitessier, King, and Frenchman Loïck Fougeron were also getting ready. Fougeron, a friend of Moitessier, managed a motorcycle company in Casablanca and planned to race on Captain Browne, a 30 foot (9.1 m) steel gaff cutter.
Crowhurst, meanwhile, was far from ready – assembly of the three hulls of his trimaran only began on 28 July at a boatyard in Norfolk. [20] [21] [22]

Attrition begins (29 July to 31 October)
Cape Town and the Cape Peninsula, with the Cape of Good Hope on the bottom right

Blyth and Knox-Johnston were well down the Atlantic by this time. Knox-Johnston, the experienced seaman, was enjoying himself, but Suhaili had problems with leaking seams near the keel. However, Knox-Johnston had managed a good repair by diving and caulking the seams underwater. [23] Blyth was not far ahead, and although leading the race, he was having far greater problems with his boat, which was suffering in the hard conditions. He had also discovered that the fuel for his generator had been contaminated, which effectively put his radio out of action. On 15 August, Blyth went in to Tristan da Cunha to pass a message to his wife, and spoke to crew from an anchored cargo ship, Gillian Gaggins. On being invited aboard by her captain, a fellow Scot, Blyth found the offer impossible to refuse and went aboard, while the ship's engineers fixed his generator and replenished his fuel supply. By this time he had already shifted his focus from the race to a more personal quest to discover his own limits; and so, despite his technical disqualification for receiving assistance, he continued sailing towards Cape Town. His boat continued to deteriorate, however, and on 13 September he put into East London. Having successfully sailed the length of the Atlantic and rounded Cape Agulhas in an unsuitable boat, he decided that he would take on the challenge of the sea again, but in a better boat and on his own terms. [24] Despite the retirements, other racers were still getting started. On Thursday, 22 August, Moitessier and Fougeron set off, with King following on Saturday (none of them wanted to leave on a Friday).
[25] With Joshua lightened for a race, Moitessier set a fast pace – more than twice as fast as Knox-Johnston over the same part of the course. Tetley sailed on 16 September, [26] and on 23 September, Crowhurst's boat, Teignmouth Electron, was finally launched in Norfolk. Under severe time pressure, Crowhurst planned to sail to Teignmouth, his planned departure point, in three days; but although the boat performed well downwind, the struggle against headwinds in the English Channel showed severe deficiencies in the boat's upwind performance, and the trip to Teignmouth took 13 days. [27] Meanwhile, Moitessier was making excellent progress. On 29 September he passed Trindade in the south Atlantic, and on 20 October he reached Cape Town, where he managed to leave word of his progress. He sailed on east into the Southern Ocean, where he continued to make good speed, covering 188 nautical miles (216 mi; 348 km) on 28 October. [28] Others were not so comfortable with the ocean conditions. On 30 October, Fougeron passed Tristan da Cunha, with King a few hundred nautical miles ahead. The next day – Halloween – they both found themselves in a severe storm. Fougeron hove-to, but still suffered a severe knockdown. King, who allowed his boat to tend to herself (a recognised procedure known as lying ahull), had a much worse experience; his boat was rolled and lost its foremast. Both men decided to retire from the race. [29]

The last starters (31 October to 23 December)
Four of the starters had decided to retire at this point, at which time Moitessier was 1,100 nautical miles (1,300 mi; 2,000 km) east of Cape Town, Knox-Johnston was 4,000 nautical miles (4,600 mi; 7,400 km) ahead in the middle of the Great Australian Bight, and Tetley was just nearing Trindade. [30] [31] [32] However, 31 October was also the last allowable day for racers to start, and was the day that the last two competitors, Donald Crowhurst and Alex Carozzo, got under way.
Carozzo, a highly regarded Italian sailor, had competed in (but not finished) that year's OSTAR. Considering himself unready for sea, he "sailed" on 31 October, to comply with the race's mandatory start date, but went straight to a mooring to continue preparing his boat without outside assistance. Crowhurst was also far from ready – his boat, barely finished, was a chaos of unstowed supplies, and his self-righting system was unbuilt. He left anyway, and started slowly making his way against the prevailing winds of the English Channel. [33]

The approximate positions of the racers on 31 October 1968, the last day on which racers could start

By mid-November Crowhurst was already having problems with his boat. Hastily built, the boat was already showing signs of being unprepared, and in the rush to depart, Crowhurst had left behind crucial repair materials. On 15 November, he made a careful appraisal of his outstanding problems and of the risks he would face in the Southern Ocean; he was also acutely aware of the financial problems awaiting him at home. Despite his analysis that Teignmouth Electron was not up to the severe conditions which she would face in the Roaring Forties, he pressed on. [34] Carozzo retired on 14 November, as he had started vomiting blood due to a peptic ulcer, and put into Porto, Portugal, for medical attention. [35] [36] Two more retirements were reported in rapid succession, as King made Cape Town on 22 November, and Fougeron stopped in Saint Helena on 27 November. [37] This left four boats in the race at the beginning of December: Knox-Johnston's Suhaili, battling frustrating and unexpected headwinds in the south Pacific Ocean, [38] Moitessier's Joshua, closing on Tasmania, [39] Tetley's Victress, just passing the Cape of Good Hope, [40] and Crowhurst's Teignmouth Electron, still in the north Atlantic. Tetley was just entering the Roaring Forties, and encountering strong winds.
He experimented with self-steering systems based on various combinations of headsails, but had to deal with some frustrating headwinds. On 21 December he encountered a calm and took the opportunity to clean the hull somewhat; while doing so, he saw a 7 foot (2.1 m) shark prowling around the boat. He later caught it, using a shark hook baited with a tin of bully beef (corned beef), and hoisted it on board for a photo. His log is full of sail changes and other such sailing technicalities and gives little impression of how he was coping with the voyage emotionally; still, describing a heavy low on 15 December he hints at his feelings, wondering "why the hell I was on this voyage anyway". [41] Knox-Johnston was having problems, as Suhaili was showing the strains of the long and hard voyage. On 3 November, his self-steering gear had failed for the last time, as he had used up all his spares. He was also still having leak problems, and his rudder was loose. Still, he felt that the boat was fundamentally sound, so he braced the rudder as well as he could, and started learning to balance the boat in order to sail a constant course on her own. On 7 November, he dropped mail off in Melbourne, and on 19 November he made an arranged meeting off the Southern Coast of New Zealand with a Sunday Mirror journalist from Otago, New Zealand. [42]

Crowhurst's false voyage (6 to 23 December)
On 10 December, Crowhurst reported that he had had some fast sailing at last, including a day's run on 8 December of 243 nautical miles (280 mi; 450 km), a new 24-hour record. Francis Chichester was sceptical of Crowhurst's sudden change in performance, and with good reason – on 6 December, Crowhurst had started creating a faked record of his voyage, showing his position advancing much faster than it actually was. The creation of this fake log was an incredibly intricate process, involving working celestial navigation in reverse.
[43] The motivation for this initial deception was most likely to allow him to claim an attention-getting record prior to entering the doldrums. However, from that point on, he started to keep two logs – his actual navigation log, and a second log in which he could enter a faked description of a round-the-world voyage. This would have been an immensely difficult task, involving the need to make up convincing descriptions of weather and sailing conditions in a different part of the world, as well as complex reverse navigation. He tried to keep his options open as long as possible, mainly by giving only extremely vague position reports; but on 17 December he sent a deliberately false message indicating that he was over the Equator, which he was not. From this point his radio reports – while remaining ambiguous – indicated steadily more impressive progress around the world; but he never left the Atlantic, and it seems that after December the mounting problems with his boat had caused him to give up on ever doing so. [44] Christmas at sea (24 to 25 December) Christmas Day 1968 was a strange day for the four racers, who were very far from friends and family. Crowhurst made a radio call to his wife on Christmas Eve, during which he was pressed for a precise position, but refused to give one. Instead, he told her he was "off Cape Town", a position far in advance of his plotted fake position, and even farther from his actual position, 20 nautical miles (37 km) off the easternmost point in Brazil, just 7 degrees (480 nautical miles (550 mi; 890 km)) south of the equator. [45] Like Crowhurst, Tetley was depressed. He had a lavish Christmas dinner of roast pheasant, but was suffering badly from loneliness. [46] Knox-Johnston, thoroughly at home on the sea, treated himself to a generous dose of whisky and held a rousing solo carol service, then drank a toast to the Queen at 3pm. 
He managed to pick up some radio stations from the U.S., and heard for the first time about the Apollo 8 astronauts, who had just made the first orbit of the Moon. [47] Moitessier, meanwhile, was sunbathing in a flat calm, deep in the Roaring Forties south-west of New Zealand. [48] Rounding the Horn (26 December to 18 March) The approximate positions of the racers on 19 January 1969 By January, concern was growing for Knox-Johnston. He was having problems with his radio transmitter and nothing had been heard from him since he had passed south of New Zealand. [49] He was actually making good progress, rounding Cape Horn on 17 January 1969. Elated by this successful climax to his voyage, he briefly considered continuing east, to sail around the Southern Ocean a second time, but soon gave up the idea and turned north for home. [50] Crowhurst's deliberately vague position reporting was also causing consternation for the press, who were desperate for hard facts. On 19 January, he finally yielded to the pressure and stated himself to be 100 nautical miles (120 mi; 190 km) south-east of Gough Island in the south Atlantic. He also reported that due to generator problems he was shutting off his radio for some time. His position was misunderstood on the receiving end to be 100 nautical miles (190 km) south-east of the Cape of Good Hope; the high speed this erroneous position implied fuelled newspaper speculation in the following radio silence, and his position was optimistically reported as rapidly advancing around the globe. Crowhurst's actual position, meanwhile, was off Brazil, where he was making slow progress south, and carefully monitoring weather reports from around the world to include in his fake log. He was also becoming increasingly concerned about Teignmouth Electron, which was starting to come apart, mainly due to slapdash construction.
[51] Moitessier also had not been heard from since New Zealand, but he was still making good progress and coping easily with the conditions of the "furious fifties". He was carrying letters from old Cape Horn sailors describing conditions in the Southern Ocean, and he frequently consulted these to get a feel for his chances of encountering ice. He reached the Horn on 6 February, but when he started to contemplate the voyage back to Plymouth he realised that he was becoming increasingly disenchanted with the race concept. [52] Cape Horn from the South. As he sailed past the Falkland Islands [53] he was sighted, and this first news of him since Tasmania caused considerable excitement. It was predicted that he would arrive home on 24 April as the winner (in fact, Knox-Johnston finished on 22 April). A huge reception was planned in Britain, from where he would be escorted to France by a fleet of French warships for an even grander reception. There was even said to be a Légion d'honneur waiting for him there. [54] Moitessier had a very good idea of this, but throughout his voyage he had been developing an increasing disgust with the excesses of the modern world; the planned celebrations seemed to him to be yet another example of brash materialism. After much debate with himself, and many thoughts of those waiting for him in England, he decided to continue sailing – past the Cape of Good Hope, and across the Indian Ocean for a second time, into the Pacific. [55] Unaware of this, the newspapers continued to publish "assumed" positions progressing steadily up the Atlantic, until, on 18 March, Moitessier used a slingshot to fire a message in a can onto the deck of a ship off Cape Town, announcing his new plans to a stunned world: My intention is to continue the voyage, still nonstop, toward the Pacific Islands, where there is plenty of sun and more peace than in Europe. Please do not think I am trying to break a record. 'Record' is a very stupid word at sea.
I am continuing nonstop because I am happy at sea, and perhaps because I want to save my soul. [56] On the same day, Tetley rounded Cape Horn, becoming the first to accomplish the feat in a multihull sailboat. Badly battered by his Southern Ocean voyage, he turned north with considerable relief. [54] [57] Teignmouth Electron was also battered, and Crowhurst badly wanted to make repairs, but without the spares that had been left behind he needed new supplies. After some planning, on 8 March he put into the tiny settlement of Río Salado, in Argentina, just south of the Río de la Plata. Although the village turned out to be the home of a small coastguard station, and his presence was logged, he got away with his supplies and without publicity. He started heading south again, intending to get some film and experience of Southern Ocean conditions to bolster his false log. [58] The concern for Knox-Johnston turned to alarm in March, with no news of him since New Zealand; aircraft taking part in a NATO exercise in the North Atlantic mounted a search operation in the region of the Azores. However, on 6 April he finally managed to make contact with a British tanker using his signal lamp, which reported the news of his position, 1,200 nautical miles (1,400 mi; 2,200 km) from home. This created a sensation in Britain, with Knox-Johnston now clearly set to win the Golden Globe trophy, and Tetley predicted to win the £5,000 prize for the fastest time. [59] [60] The approximate positions of the racers on 10 April 1969 Crowhurst re-opened radio contact on 10 April, reporting himself to be "heading" towards the Diego Ramirez Islands, near Cape Horn. This news caused another sensation, as with his projected arrival in the UK at the start of July he now seemed to be a contender for the fastest time, and (very optimistically) even for a close finish with Tetley. Once his projected false position approached his actual position, he started heading north at speed.
[61] Tetley, informed that he might be robbed of the fastest-time prize, started pushing harder, even though his boat was having significant problems – he made major repairs at sea in an attempt to stop the port hull of his trimaran falling off, and kept racing. On 22 April, he crossed his outbound track, one definition of a circumnavigation. [62] The finish (22 April to 1 July) On the same day, 22 April, Knox-Johnston completed his voyage where it had started, in Falmouth. This made him the winner of the Golden Globe trophy, and the first person to sail single-handed and non-stop around the world, which he had done in 312 days. [63] This left Tetley and Crowhurst apparently fighting for the £5,000 prize for the fastest time. However, Tetley knew that he was pushing his boat too hard. On 20 May he ran into a storm near the Azores and began to worry about the boat's severely weakened state. Hoping that the storm would soon blow over, he lowered all sail and went to sleep with the boat lying a-hull. In the early hours of the next day he was awoken by the sounds of tearing wood. Fearing that the bow of the port hull might have broken off, he went on deck to cut it loose, only to discover that in breaking away it had made a large hole in the main hull, from which Victress was now taking on water too rapidly to stop. He sent a Mayday, and luckily got an almost immediate reply. He abandoned ship just before Victress finally sank, and was rescued from his liferaft that evening, having come to within 1,100 nautical miles (1,300 mi; 2,000 km) of finishing what would have been the most significant voyage ever made in a multi-hulled boat. [64] Crowhurst was left as the only person in the race, and – given his high reported speeds – virtually guaranteed the £5,000 prize. This would, however, also guarantee intense scrutiny of himself, his stories, and his logs by genuine Cape Horn veterans such as the sceptical Chichester.
Although he had put great effort into his fabricated log, such a deception would in practice be extremely difficult to carry off, particularly for someone who did not have actual experience of the Southern Ocean; something of which he must have been aware at heart. Although he had been sailing fast – at one point making over 200 nautical miles (230 mi; 370 km) in a day – as soon as he learned of Tetley's sinking, he slowed down to a wandering crawl. [65] Crowhurst's main radio failed at the beginning of June, shortly after he had learned that he was the sole remaining competitor. Plunged into unwilling solitude, he spent the following weeks attempting to repair the radio, and on 22 June was finally able to transmit and receive in Morse code. The following days were spent exchanging cables with his agent and the press, during which he was bombarded with news of syndication rights, a welcoming fleet of boats and helicopters, and a rapturous welcome by the British people. It became clear that he could not now avoid the spotlight. Unable to see a way out of his predicament, he plunged into abstract philosophy, attempting to find an escape in metaphysics, and on 24 June he started writing a long essay to express his ideas. Inspired (in a misguided way) by the work of Einstein, whose book Relativity: The Special and General Theory he had aboard, the theme of Crowhurst's writing was that a sufficiently intelligent mind can overcome the constraints of the real world. Over the following eight days, he wrote 25,000 words of increasingly tortured prose, drifting farther and farther from reality, as Teignmouth Electron continued sailing slowly north, largely untended. Finally, on 1 July, he concluded his writing with a garbled suicide note and, it is assumed, jumped overboard. [66] Moitessier, meanwhile, had concluded his own personal voyage more happily.
He had circumnavigated the world and sailed almost two-thirds of the way round a second time, all non-stop and mostly in the Roaring Forties. Despite heavy weather and a couple of severe knockdowns, he contemplated rounding the Horn again. However, he decided that he and Joshua had had enough and sailed to Tahiti, from where he and his wife had previously set out for Alicante. He thus completed his second personal circumnavigation of the world (including the previous voyage with his wife) on 21 June 1969. He started work on his book. [67] Knox-Johnston, as the only finisher, was awarded both the Golden Globe trophy and the £5,000 prize for the fastest time. He continued to sail and circumnavigated three more times. He was awarded a CBE in 1969 and was knighted in 1995. [68] Joshua, restored, at the Maritime Museum at La Rochelle It is impossible to say that Moitessier would have won if he had completed the race, as he would have been sailing in different weather conditions from those Knox-Johnston encountered, but based on his time from the start to Cape Horn being about 77% of Knox-Johnston's, it would have been extremely close. However, Moitessier is on record as stating that he would not have won. [69] His book, The Long Way, tells the story of his voyage as a spiritual journey as much as a sailing adventure and is still regarded as a classic of sailing literature. [70] Joshua was beached, along with many other yachts, by a storm at Cabo San Lucas in December 1982; with a new boat, Tamata, Moitessier sailed back to Tahiti from the San Francisco Bay. He died in 1994. [71] When Teignmouth Electron was discovered drifting and abandoned in the Atlantic on 10 July, a fund was started for Crowhurst's wife and children; Knox-Johnston donated his £5,000 prize to the fund, and more money was added by press and sponsors. [72] The news of his deception, mental breakdown, and suicide, as chronicled in his surviving logbooks, was made public a few weeks later, causing a sensation.
Nicholas Tomalin and Ron Hall, two of the journalists connected with the race, wrote a 1970 book on Crowhurst's voyage, The Strange Last Voyage of Donald Crowhurst, described by Hammond Innes in its Sunday Times review as "fascinating, uncomfortable reading" and a "meticulous investigation" of Crowhurst's downfall. [73] Tetley found it impossible to adapt to his old way of life after his adventure. He was awarded a consolation prize of £1,000, with which he decided to build a new trimaran for a round-the-world speed record attempt. His 60 foot (18 m) boat was built in 1971, but his search for sponsorship to pay for fitting-out met with consistent rejection. His book, Trimaran Solo, sold poorly. Although he outwardly seemed to be coping, the repeated failures must have taken their toll. [74] In February 1972, he went missing from his home in Dover. His body was found in nearby woods hanging from a tree three days later. His death was originally believed to be a suicide. At the inquest, it was revealed that the body had been discovered wearing lingerie and the hands were bound. The attending pathologist suggested the likelihood of masochistic sexual activity. Finding no evidence to suggest that Tetley had killed himself, the coroner recorded an open verdict. Tetley was cremated; Knox-Johnston and Blyth were among the mourners in attendance. [69] Blyth devoted his life to the sea and to introducing others to its challenge. In 1970–1971 he sailed a sponsored boat, British Steel, single-handedly around the world "the wrong way", against the prevailing winds. He subsequently took part in the Whitbread Round the World Yacht Race and founded the Global Challenge race, which allows amateurs to race around the world. His old rowing partner, John Ridgway, followed a similar course; he started an adventure school in Scotland, and circumnavigated the world twice under sail: once in the Whitbread Round the World Yacht Race, and once with his wife.
King finally completed a circumnavigation in Galway Blazer II in 1973. [75] Suhaili was sailed for some years more, including a trip to Greenland, and spent some years on display at the National Maritime Museum at Greenwich. However, her planking began to shrink because of the dry conditions and, unwilling to see her deteriorate, Knox-Johnston removed her from the museum and had her refitted in 2002. She was returned to the water and is now based at the National Maritime Museum Cornwall. Teignmouth Electron was sold to a tour operator in Jamaica and eventually ended up damaged and abandoned on Cayman Brac, where she lies to this day. [76] After being driven ashore during a storm at Cabo San Lucas, the restored Joshua was acquired by the maritime museum in La Rochelle, France, where she serves as part of a cruising school. [76] Given the failure of most starters and the tragic outcome of Crowhurst's voyage, considerable controversy was raised over the race and its organisation. No follow-up race was held for some time. However, in 1982 the BOC Challenge was organised; this single-handed round-the-world race with stops was inspired by the Golden Globe and has been held every four years since. In 1989, Philippe Jeantot founded the Vendée Globe race, a non-stop, single-handed, round-the-world race. Essentially the successor to the Golden Globe, this race is also held every four years and has attracted a strong public following for the sport. Nine competitors participated in the race. Most of these had at least some prior sailing experience, although only Carozzo had competed in a major ocean race prior to the Golden Globe Race. The following table lists the entrants in order of starting, together with their prior sailing experience, and achievements in the race: For the 50th anniversary of the first race, there was another Golden Globe Race in 2018. Entrants were limited to sailing yachts and equipment similar to what was available to Sir Robin in the original race.
This race started from Les Sables-d'Olonne on 1 July 2018. The prize purse was confirmed as £75,000, with all sailors who finished before 15:25 on 22 April 2019 winning their entry fee back. [77] For the 54th anniversary of the first race, another Golden Globe Race was held in 2022. Entrants were again limited to yachts and equipment similar to what was available to Sir Robin in the original race, but in two classes. This race started from Les Sables-d'Olonne on 4 September 2022 and was won by South African Kirsten Neuschäfer after an official time of 233 days, 20 hours, 43 minutes and 47 seconds at sea. [78] ^ ^ ^ ^ Knox-Johnston 1969, pp. 1–12. ^ Knox-Johnston 1969, p. 17. ^ Nichols 2001, pp. 32–33. ^ Nichols 2001, pp. 12–28. ^ Tomalin & Hall 2003, pp. 24–25. ^ Nichols 2001, pp. 19–26. ^ a b Tomalin & Hall 2003, pp. 29–30. ^ Nichols 2001, p. 17. ^ Tomalin & Hall 2003, p. 30. ^ Moitessier 1995, p. 5. ^ Tomalin & Hall 2003, pp. 19–26. ^ Tomalin & Hall 2003, pp. 33–35, 39–40. ^ Tomalin & Hall 2003, pp. 35–38. ^ Nichols 2001, pp. 45–50. ^ Nichols 2001, pp. 55–56. ^ Nichols 2001, pp. 66, 85–87. ^ Tetley 1970, pp. 15–17. ^ Nichols 2001, pp. 56, 63–64. ^ Tomalin & Hall 2003, p. 39. ^ Knox-Johnston 1969, pp. 42–44. ^ Nichols 2001, pp. 92–101. ^ Moitessier 1995, p. 3. ^ Tetley 1970, pp. 23–24. ^ Tomalin & Hall 2003, pp. 50–56. ^ Moitessier 1995, pp. 19–29, 36–45, 56. ^ Nichols 2001, pp. 142, 149–151. ^ Moitessier 1995, p. 56. ^ Knox-Johnston 1969, pp. 93–94. ^ Tetley 1970, pp. 60–61. ^ Tomalin & Hall 2003, pp. 75–81. ^ Tomalin & Hall 2003, pp. 79–97. ^ Nichols 2001, p. 181. ^ ^ Nichols 2001, pp. 155–156. ^ Knox-Johnston 1969, pp. 125–128. ^ Moitessier 1995, pp. 83–87. ^ Tetley 1970, p. 79. ^ Tetley 1970, pp. 79–91. ^ Knox-Johnston 1969, pp. 97, 101–102, 117–123. ^ Tomalin & Hall 2003, pp. 98–116. ^ Tomalin & Hall 2003, pp. 117–126. ^ Tomalin & Hall 2003, pp. 126, 133. ^ Tetley 1970, p. 93. ^ Knox-Johnston 1969, pp. 140–142. ^ Moitessier 1995, p. 93.
^ Nichols 2001, p. 213. ^ Knox-Johnston 1969, pp. 160–161, 175. ^ Tomalin & Hall 2003, pp. 143–147. ^ Moitessier 1995, pp. 109–111, 140–142. ^ Moitessier 1995, p. 146. ^ a b Nichols 2001, pp. 241–242. ^ Moitessier 1995, pp. 148, 158–165. ^ Nichols 2001, pp. 242–244. ^ Tetley 1970 (Trimaran Solo), pp. 124–131. ^ Tomalin & Hall 2003, pp. 151–162. ^ Knox-Johnston 1969, pp. 205–206. ^ Nichols 2001, pp. 248, 251. ^ Tomalin & Hall 2003, pp. 170–172, 185–186. ^ Tetley 1970, pp. 136–141. ^ Nichols 2001, p. 267. ^ Tetley 1970, pp. 149–160. ^ Tomalin & Hall 2003, pp. 186, 190–191. ^ Nichols 2001, pp. 195–251. ^ Moitessier 1995, pp. 172–175. ^ ^ a b Eakin 2009. ^ Originally retrieved 6 March 2006. ^ Nichols 2001, pp. 293–294. ^ Nichols 2001, pp. 283–285. ^ ^ Nichols 2001, pp. 275–282. ^ Nichols 2001, pp. 294–295. ^ a b Nichols 2001, pp. 295–296. ^ ^
Purple Heart Stockpile: The WWII Medals Still Being Issued
During WWII, the United States military made over 1.5 million Purple Heart medals in anticipation of a colossal casualty rate from the planned invasion of Japan, Operation Downfall. The invasion never took place, because just before the launch of the assault, Japan surrendered. The US military nonetheless began awarding Purple Hearts to servicemen as the war came to a close, but so many had been produced that the supply was not exhausted at that time. "Time and combat will continue to erode the WWII stock, but it's anyone's guess how long it will be before the last Purple Heart for the invasion of Japan is pinned on a young soldier's chest," historian D.M. Giangreco is quoted as saying in an American newspaper. By May 1945, Nazi Germany had thrown in the towel, but the war with Japan was still raging. At that point, the United States concluded that Japan could be forced to surrender by the use of atomic bombs, a fierce amphibious invasion, or both. The high casualty level of the Battle of Tarawa had given the Americans a hint of how tough the Japanese defenses were, and the US military could only expect a far higher casualty rate in an invasion of the Japanese mainland. By this time, Harry S. Truman had succeeded the deceased Franklin D. Roosevelt as president, and the decision on how to end the war with Japan was his to make. As Truman weighed the relative merits of an invasion against those of an atomic bombing, the US military's top brass proceeded with making and stockpiling medals, to be given to the soldiers they anticipated would be wounded during the invasion. Truman ultimately opted for the atomic bombing. By the end of the war, over one million wounded or dead US servicemen had received the Purple Heart.
After decorating all the WWII casualties, the United States had almost 500,000 medals left. The Korean and Vietnam Wars would follow, so the remaining Purple Hearts were kept on standby, to be issued to wounded or killed servicemen. In the years following the Vietnam War, as the United States became involved in several further conflicts, it ordered more than 35,000 additional medals; 21,000 of these were ordered in 2008. These extra medals were made not because the Defense Logistics Agency had run out of Purple Hearts, but simply to replenish its inventory and avoid a shortage. WWII saw the first large-scale production of the Purple Heart, and with such a massive quantity released in 1945, it is believed that medals from WWII are still being issued. Notably, in the late 1970s and 1980s, the growth of terrorism and the mounting US military personnel casualty rate saw the United States Defense Logistics Agency revisit the Purple Heart stockpile. Several of these WWII-era medals were declared unusable, while many more were refurbished and repackaged. The WWII-era Purple Heart had a different ribbon from contemporary ones; the refurbished medals were given new ribbons to bring them in line with modern requirements. These refurbished medals look identical to new ones, and a new recipient may be unable to tell from which era the medal came.
According to Half a Million Purple Hearts, an article by Kathryn Moore and D.M. Giangreco, during the controversies surrounding the planned 50th-anniversary display of the Enola Gay at the National Air and Space Museum, several people came forward to allege that the US military's top brass had made up the extremely high casualty estimates of over a million servicemen in a bid to justify the use of the atomic bombs on an already beaten Japan in 1945, and had made such a colossal number of Purple Hearts to solidify their claims. With so many medals made, and considering the statistics recorded over the years, it is quite possible that a few Purple Hearts from WWII will still be given to future recipients.
President Gerald Ford, Highest Elected Office – House of Representatives
Gerald Rudolph Ford Jr. [1] (born Leslie Lynch King Jr.; July 14, 1913 – December 26, 2006) was an American politician who served as the 38th president of the United States from 1974 to 1977. He previously served as the leader of the Republican Party in the House of Representatives from 1965 to 1973, when he was appointed the 40th vice president by President Richard Nixon, after the resignation of Spiro Agnew. Ford succeeded to the presidency when Nixon resigned in 1974, but was defeated for election to a full term in 1976. Ford is the only person to become U.S. president without winning an election for president or vice president. Born in Omaha, Nebraska, and raised in Grand Rapids, Michigan, Ford attended the University of Michigan, where he was a member of the school's football team, winning two national championships. Following his senior year, he turned down offers from the Detroit Lions and Green Bay Packers, instead opting to go to Yale Law School. [2] After the attack on Pearl Harbor, he enlisted in the U.S. Naval Reserve, serving from 1942 to 1946; he left as a lieutenant commander. Ford began his political career in 1949 as the U.S. representative from Michigan's 5th congressional district. He served in this capacity for nearly 25 years, the final nine of them as the House minority leader. In December 1973, two months after the resignation of Spiro Agnew, Ford became the first person appointed to the vice presidency under the terms of the 25th Amendment. After the subsequent resignation of President Nixon in August 1974, Ford immediately assumed the presidency. Domestically, Ford presided over the worst economy in the four decades since the Great Depression, with growing inflation and a recession during his tenure. [3] In one of his most controversial acts, he granted a presidential pardon to Richard Nixon for his role in the Watergate scandal.
During Ford's presidency, foreign policy was characterized in procedural terms by the increased role Congress began to play, and by the corresponding curb on the powers of the president. [4] As president, Ford signed the Helsinki Accords, which marked a move toward détente in the Cold War. With the collapse of South Vietnam nine months into his presidency, U.S. involvement in the Vietnam War essentially ended. In the 1976 Republican presidential primary campaign, Ford defeated former California Governor Ronald Reagan for the Republican nomination, but narrowly lost the presidential election to the Democratic challenger, former Georgia Governor Jimmy Carter. Following his years as president, Ford remained active in the Republican Party. His moderate views on various social issues increasingly put him at odds with conservative members of the party in the 1990s and early 2000s. In retirement, Ford set aside the enmity he had felt towards Carter following the 1976 election, and the two former presidents developed a close friendship. After experiencing a series of health problems, he died at home on December 26, 2006, at age 93. Surveys of historians and political scientists have ranked Ford as a below-average president, [5] [6] [7] though retrospective public polls on his time in office were more positive. [8] [9] Ford in 1916 Ford was born Leslie Lynch King Jr. on July 14, 1913, at 3202 Woolworth Avenue in Omaha, Nebraska, where his parents lived with his paternal grandparents. He was the only child of Dorothy Ayer Gardner and Leslie Lynch King Sr., a wool trader. His father was the son of prominent banker Charles Henry King and Martha Alicia King (née Porter). Gardner separated from King just sixteen days after her son's birth. She took her son with her to Oak Park, Illinois, home of her sister Tannisse and brother-in-law, Clarence Haskins James. From there, she moved to the home of her parents, Levi Addison Gardner and Adele Augusta Ayer, in Grand Rapids, Michigan. 
Gardner and King divorced in December 1913, and she gained full custody of her son. Ford's paternal grandfather Charles Henry King paid child support until shortly before his death in 1930. [10] Ford later said that his biological father had a history of hitting his mother. [11] In a biography of Ford, James M. Cannon wrote that the separation and divorce of Ford's parents was sparked when, a few days after Ford's birth, Leslie King took a butcher knife and threatened to kill his wife, infant son, and Ford's nursemaid. Ford later told confidants that his father had first hit his mother when she had smiled at another man during their honeymoon. [12] After living with her parents for two and a half years, on February 1, 1917, Gardner married Gerald Rudolff Ford, a salesman in a family-owned paint and varnish company. Though never formally adopted, her young son was referred to as Gerald Rudolff Ford Jr. from then on; the name change was formalized on December 3, 1935. [13] He was raised in Grand Rapids with his three half-brothers from his mother's second marriage: Thomas Gardner "Tom" Ford (1918–1995), Richard Addison "Dick" Ford (1924–2015), and James Francis "Jim" Ford (1927–2001). [14] Ford was involved in the Boy Scouts of America, and earned that program's highest rank, Eagle Scout. [15] He is the only Eagle Scout to have ascended to the U.S. presidency. [15] Ford attended Grand Rapids South High School, where he was a star athlete and captain of the football team. [16] In 1930, he was selected to the All-City team of the Grand Rapids City League. He also attracted the attention of college recruiters. [17] Ford during practice as a center on the University of Michigan Wolverines football team, 1933 Ford attended the University of Michigan, where he played center, linebacker, and long snapper for the school's football team [18] and helped the Wolverines to two undefeated seasons and national titles in 1932 and 1933.
In his senior year of 1934, the team suffered a steep decline and won only one game, but Ford was still the team's star player. In one of those games, Michigan held heavily favored Minnesota—the eventual national champion—to a scoreless tie in the first half. After the game, assistant coach Bennie Oosterbaan said, "When I walked into the dressing room at halftime, I had tears in my eyes I was so proud of them. Ford and [Cedric] Sweet played their hearts out. They were everywhere on defense." Ford later recalled, "During 25 years in the rough-and-tumble world of politics, I often thought of the experiences before, during, and after that game in 1934. Remembering them has helped me many times to face a tough situation, take action, and make every effort possible despite adverse odds." His teammates later voted Ford their most valuable player, with one assistant coach noting, "They felt Jerry was one guy who would stay and fight in a losing cause." [19] During Ford's senior year, a controversy developed when Georgia Tech said that it would not play a scheduled game with Michigan if a black player named Willis Ward took the field. Students, players and alumni protested, but university officials capitulated and kept Ward out of the game. Ford was Ward's best friend on the team, and they roomed together while on road trips. Ford reportedly threatened to quit the team in response to the university's decision, but he eventually agreed to play against Georgia Tech when Ward personally asked him to play. [20] In 1934, Ford was selected for the Eastern Team on the Shriner's East–West Shrine Game at San Francisco (a benefit for physically disabled children), played on January 1, 1935. As part of the 1935 Collegiate All-Star football team, Ford played against the Chicago Bears in the Chicago College All-Star Game at Soldier Field. [21] In honor of his athletic accomplishments and his later political career, the University of Michigan retired Ford's No. 48 jersey in 1994. 
With the blessing of the Ford family, it was placed back into circulation in 2012 as part of the Michigan Football Legends program and issued to sophomore linebacker Desmond Morgan before a home game against Illinois on October 13. [22] Throughout life, Ford remained interested in his school and football; he occasionally attended games. Ford also visited with players and coaches during practices; at one point, he asked to join the players in the huddle. [23] Before state events, Ford often had the Navy band play the University of Michigan fight song, "The Victors," instead of "Hail to the Chief." [24] Ford graduated from Michigan in 1935 with a Bachelor of Arts degree in economics. He turned down offers from the Detroit Lions and Green Bay Packers of the National Football League. Instead, he took a job in September 1935 as the boxing coach and assistant varsity football coach at Yale University [25] and applied to its law school. [26] Ford hoped to attend Yale Law School beginning in 1935. Yale officials at first denied his admission to the law school because of his full-time coaching responsibilities. He spent the summer of 1937 as a student at the University of Michigan Law School [27] and was eventually admitted in the spring of 1938 to Yale Law School. [25] That year he was also promoted to the position of junior varsity head football coach at Yale. [28] While at Yale, Ford began working as a model. He initially worked with the John Robert Powers agency before investing in Harry Conover's agency, with whom he modelled until 1941. [29] While attending Yale Law School, Ford joined a group of students led by R. Douglas Stuart Jr., and signed a petition to enforce the 1939 Neutrality Act. The petition was circulated nationally and was the inspiration for the America First Committee, a group determined to keep the U.S. out of World War II. 
[30] His introduction into politics was in the summer of 1940 when he worked for the Republican presidential campaign of Wendell Willkie. [25] Ford graduated in the top third of his class in 1941, and was admitted to the Michigan bar shortly thereafter. In May 1941, he opened a Grand Rapids law practice with a friend, Philip W. Buchen. [25] The gunnery officers of USS Monterey, 1943. Ford is second from the right, in the front row. Following the December 7, 1941, attack on Pearl Harbor, Ford enlisted in the Navy. [31] He received a commission as ensign in the U.S. Naval Reserve on April 13, 1942. [32] On April 20, he reported for active duty to the V-5 instructor school at Annapolis, Maryland. After one month of training, he went to Navy Preflight School in Chapel Hill, North Carolina, where he was one of 83 instructors and taught elementary navigation skills, ordnance, gunnery, first aid, and military drill. In addition, he coached all nine sports that were offered, but mostly swimming, boxing, and football. During the year he was at the Preflight School, he was promoted to lieutenant (junior grade) on June 2, 1942, and to lieutenant in March 1943. [33] After Ford applied for sea duty, he was sent in May 1943 to the pre-commissioning detachment for the new aircraft carrier USS Monterey, at New York Shipbuilding Corporation, Camden, New Jersey. From the ship's commissioning on June 17, 1943, until the end of December 1944, Ford served as the assistant navigator, athletic officer, and antiaircraft battery officer on board the Monterey. While he was on board, the carrier participated in many actions in the Pacific Theater with the Third and Fifth Fleets in late 1943 and 1944. In 1943, the carrier helped secure Makin Island in the Gilberts and participated in carrier strikes against Kavieng, Papua New Guinea.
During the spring of 1944, the Monterey supported landings at Kwajalein and Eniwetok and participated in carrier strikes in the Marianas, Western Carolines, and northern New Guinea, as well as in the Battle of the Philippine Sea. [34] After an overhaul, from September to November 1944, aircraft from the Monterey launched strikes against Wake Island, participated in strikes in the Philippines and Ryukyus, and supported the landings at Leyte and Mindoro. [34] Although the ship was not damaged by the Empire of Japan's forces, the Monterey was one of several ships damaged by Typhoon Cobra, which hit Admiral William Halsey's Third Fleet on December 18–19, 1944. The Third Fleet lost three destroyers and over 800 men during the typhoon. The Monterey was damaged by a fire, which was started by several of the ship's aircraft tearing loose from their cables and colliding on the hangar deck. Ford was serving as General Quarters Officer of the Deck and was ordered to go below to assess the raging fire. He did so safely, and reported his findings back to the ship's commanding officer, Captain Stuart H. Ingersoll. The ship's crew was able to contain the fire, and the ship got underway again. [35] After the fire, the Monterey was declared unfit for service. Ford was detached from the ship and sent to the Navy Pre-Flight School at Saint Mary's College of California, where he was assigned to the Athletic Department until April 1945. From the end of April 1945 to January 1946, he was on the staff of the Naval Reserve Training Command, Naval Air Station, Glenview, Illinois, at the rank of lieutenant commander.
[25] Ford received the following military awards: the American Campaign Medal, the Asiatic-Pacific Campaign Medal with nine 3⁄16" bronze stars (for operations in the Gilbert Islands, Bismarck Archipelago, Marshall Islands, Asiatic and Pacific carrier raids, Hollandia, Marianas, Western Carolines, Western New Guinea, and the Leyte Operation), the Philippine Liberation Medal with two 3⁄16" bronze stars (for Leyte and Mindoro), and the World War II Victory Medal. [31] He was honorably discharged in February 1946. [25]

U.S. House of Representatives (1949–1973)

A billboard for Ford's 1948 congressional campaign from Michigan's 5th district After Ford returned to Grand Rapids in 1946, he became active in local Republican politics, and supporters urged him to challenge Bartel J. Jonkman, the incumbent Republican congressman. Military service had changed his view of the world. "I came back a converted internationalist", Ford wrote, "and of course our congressman at that time was an avowed, dedicated isolationist. And I thought he ought to be replaced. Nobody thought I could win. I ended up winning two to one." [17] During his first campaign in 1948, Ford visited voters at their doorsteps and as they left the factories where they worked. [36] Ford also visited local farms where, in one instance, a wager resulted in Ford spending two weeks milking cows following his election victory. [37] Ford was a member of the House of Representatives for 25 years, holding Michigan's 5th congressional district seat from 1949 to 1973. It was a tenure largely notable for its modesty. As an editorial in The New York Times described him, Ford "saw himself as a negotiator and a reconciler, and the record shows it: he did not write a single piece of major legislation in his entire career." [38] Appointed to the House Appropriations Committee two years after being elected, he was a prominent member of the Defense Appropriations Subcommittee.
Ford described his philosophy as "a moderate in domestic affairs, an internationalist in foreign affairs, and a conservative in fiscal policy." [39] He voted in favor of the Civil Rights Acts of 1957, [40] [41] 1960, [42] [43] 1964, [44] [45] and 1968, [46] [47] as well as the 24th Amendment to the U.S. Constitution and the Voting Rights Act of 1965. [48] [49] [50] Ford was known to his colleagues in the House as a "Congressman's Congressman". [51] In the early 1950s, Ford declined offers to run for either the Senate or the Michigan governorship. Rather, his ambition was to become Speaker of the House, [52] which he called "the ultimate achievement. To sit up there and be the head honcho of 434 other people and have the responsibility, aside from the achievement, of trying to run the greatest legislative body in the history of mankind ... I think I got that ambition within a year or two after I was in the House of Representatives". [31] The Warren Commission (Ford 4th from left) presents its report to President Johnson (1964). On November 29, 1963, President Lyndon B. Johnson appointed Ford to the Warren Commission, a special task force set up to investigate the assassination of President John F. Kennedy. [53] Ford was assigned to prepare a biography of accused assassin Lee Harvey Oswald. He and Earl Warren also interviewed Jack Ruby, Oswald's killer. According to a 1963 FBI memo that was released to the public in 2008, Ford was in contact with the FBI throughout his time on the Warren Commission and relayed information to the deputy director, Cartha DeLoach, about the panel's activities. [54] [55] [56] In the preface to his book, A Presidential Legacy and The Warren Commission, Ford defended the work of the commission and reiterated his support of its conclusions. [57]

House Minority Leader (1965–1973)

Congressman Gerald Ford, MSFC director Wernher von Braun, Congressman George H. Mahon, and NASA Administrator James E.
Webb visit the Marshall Space Flight Center for a briefing on the Saturn program, 1964. In 1964, Lyndon Johnson led his party to a landslide victory, securing another term as president and taking 36 seats from Republicans in the House of Representatives. Following the election, members of the Republican caucus looked to select a new minority leader. Three members approached Ford to see if he would be willing to serve; after consulting with his family, he agreed. After a closely contested election, Ford was chosen to replace Charles Halleck of Indiana as minority leader. [58] The members of the Republican caucus who encouraged and eventually endorsed Ford's run for House minority leader became known as the "Young Turks". One of them, Congressman Donald H. Rumsfeld of Illinois's 13th congressional district, would later serve in Ford's administration as chief of staff and secretary of defense. [59] With a Democratic majority in both the House of Representatives and the Senate, the Johnson Administration proposed and passed a series of programs that Johnson called the "Great Society". During the first session of the Eighty-ninth Congress alone, the Johnson Administration submitted 87 bills to Congress, and Johnson signed 84, or 96%, arguably the most successful legislative agenda in Congressional history. [60] In 1966, criticism over the Johnson Administration's handling of the Vietnam War began to grow, with Ford and Congressional Republicans expressing concern that the United States was not doing what was necessary to win the war. Public sentiment also began to move against Johnson, and the 1966 midterm elections produced a 47-seat swing in favor of the Republicans. This was not enough to give Republicans a majority in the House, but the victory gave Ford the opportunity to prevent the passage of further Great Society programs.
[58] Ford's private criticism of the Vietnam War became public knowledge after he spoke from the floor of the House and questioned whether the White House had a clear plan to bring the war to a successful conclusion. [58] The speech angered President Johnson, who accused Ford of having played "too much football without a helmet". [58] [61] As minority leader in the House, Ford appeared in a popular series of televised press conferences with Illinois Senator Everett Dirksen, in which they proposed Republican alternatives to Johnson's policies. Many in the press jokingly called this "The Ev and Jerry Show." [62] Johnson said at the time, "Jerry Ford is so dumb he can't fart and chew gum at the same time." [63] The press, used to sanitizing Johnson's salty language, reported this as "Gerald Ford can't walk and chew gum at the same time." [64] After Richard Nixon was elected president in November 1968, Ford's role shifted to being an advocate for the White House agenda. Congress passed several of Nixon's proposals, including the National Environmental Policy Act and the Tax Reform Act of 1969. Another high-profile victory for the Republican minority was the State and Local Fiscal Assistance act. Passed in 1972, the act established a Revenue Sharing program for state and local governments. [65] Ford's leadership was instrumental in shepherding revenue sharing through Congress, and resulted in a bipartisan coalition that supported the bill with 223 votes in favor (compared with 185 against). [58] [66] During the eight years (1965–1973) that Ford served as minority leader, he won many friends in the House because of his fair leadership and inoffensive personality. 
[58]

Vice presidency (1973–1974)

Gerald and Betty Ford with the President and First Lady Pat Nixon after President Nixon nominated Ford to be vice president, October 13, 1973 For the preceding decade, Ford had been working unsuccessfully to help Republicans across the country win a majority in the chamber so that he could become House Speaker. He promised his wife that he would try again in 1974 and then retire in 1976. [31] However, on October 10, 1973, Spiro Agnew resigned from the vice presidency. [67] According to The New York Times, Nixon "sought advice from senior Congressional leaders about a replacement." The advice was unanimous. House Speaker Carl Albert recalled later, "We gave Nixon no choice but Ford." [38] Ford agreed to the nomination, telling his wife that the vice presidency would be "a nice conclusion" to his career. [31] Ford was nominated to take Agnew's position on October 12, the first time the vice-presidential vacancy provision of the 25th Amendment had been implemented. The United States Senate voted 92 to 3 to confirm Ford on November 27. On December 6, the House confirmed Ford by a vote of 387 to 35. After the confirmation vote in the House, Ford took the oath of office as vice president. [25] Ford became vice president as the Watergate scandal was unfolding. On August 1, 1974, Chief of Staff Alexander Haig contacted Ford to tell him to prepare for the presidency. [25] At the time, Ford and his wife, Betty, were living in suburban Virginia, waiting for their expected move into the newly designated vice president's residence in Washington, D.C. However, "Al Haig asked to come over and see me", Ford later said, "to tell me that there would be a new tape released on a Monday, and he said the evidence in there was devastating and there would probably be either an impeachment or a resignation. And he said, 'I'm just warning you that you've got to be prepared, that things might change dramatically and you could become President.'
And I said, 'Betty, I don't think we're ever going to live in the vice president's house.'" [17] Gerald Ford is sworn in as president by Chief Justice Warren Burger in the White House East Room, while Betty Ford looks on. When Nixon resigned on August 9, 1974, Ford automatically assumed the presidency, taking the oath of office in the East Room of the White House. This made him the only person to become the nation's chief executive without being elected to the presidency or the vice presidency. Immediately afterward, he spoke to the assembled audience in a speech that was broadcast live to the nation, [68] [69] noting the peculiarity of his position. [70] He later declared that "our long national nightmare is over". [71] On August 20, Ford nominated former New York Governor Nelson Rockefeller to fill the vice presidency he had vacated. [72] Rockefeller's top competitor had been George H. W. Bush. Rockefeller underwent extended hearings before Congress, which caused embarrassment when it was revealed he made large gifts to senior aides, such as Henry Kissinger. Although conservative Republicans were not pleased that Rockefeller was picked, most of them voted for his confirmation, and his nomination passed both the House and Senate. Some, including Barry Goldwater, voted against him. [73] President Ford appears at a House Judiciary Subcommittee hearing in reference to his pardon of Richard Nixon. On September 8, 1974, Ford issued Proclamation 4311, which gave Nixon a full and unconditional pardon for any crimes he might have committed against the United States while president. [74] [75] [76] In a televised broadcast to the nation, Ford explained that he felt the pardon was in the best interests of the country, and that the Nixon family's situation "is a tragedy in which we all have played a part. It could go on and on and on, or someone must write the end to it. I have concluded that only I can do that, and if I can, I must." 
[77] Ford's decision to pardon Nixon was highly controversial. Critics derided the move and said a "corrupt bargain" had been struck between the two men, [17] in which Ford's pardon was granted in exchange for Nixon's resignation, elevating Ford to the presidency. Ford's first press secretary and close friend Jerald terHorst resigned his post in protest after the pardon. [78] According to Bob Woodward, Nixon Chief of Staff Alexander Haig proposed a pardon deal to Ford, but Ford ultimately decided to pardon Nixon for other reasons, primarily the friendship the two men shared. [79] Regardless, historians believe the controversy was one of the major reasons Ford lost the 1976 presidential election, an observation with which Ford agreed. [79] In an editorial at the time, The New York Times stated that the Nixon pardon was a "profoundly unwise, divisive and unjust act" that in a stroke had destroyed the new president's "credibility as a man of judgment, candor and competence". [38] On October 17, 1974, Ford testified before Congress on the pardon. He was the first sitting president since Abraham Lincoln to testify before the House of Representatives. [80] [81] In the months following the pardon, Ford often declined to mention President Nixon by name, referring to him in public as "my predecessor" or "the former president." When Ford was pressed on the matter on a 1974 trip to California, White House correspondent Fred Barnes recalled that he replied "I just can't bring myself to do it." [82] After Ford left the White House in January 1977, he privately justified his pardon of Nixon by carrying in his wallet a portion of the text of Burdick v. United States , a 1915 U.S. Supreme Court decision which stated that a pardon indicated a presumption of guilt, and that acceptance of a pardon was tantamount to a confession of that guilt. [83] In 2001, the John F. Kennedy Library Foundation awarded the John F. Kennedy Profile in Courage Award to Ford for his pardon of Nixon.
[84] In presenting the award to Ford, Senator Edward Kennedy said that he had initially been opposed to the pardon, but later decided that history had proven Ford to have made the correct decision. [85]

Draft dodgers and deserters

On September 16 (shortly after he pardoned Nixon), Ford issued Presidential Proclamation 4313, which introduced a conditional amnesty program for military deserters and Vietnam War draft dodgers who had fled to countries such as Canada. The conditions of the amnesty required participants to reaffirm their allegiance to the United States and to serve two years in a public service job, or, for those with less than two years of honorable military service, enough additional service to total two years. [86] The program for the Return of Vietnam Era Draft Evaders and Military Deserters [87] established a Clemency Board to review the records and make recommendations for receiving a presidential pardon and a change in military discharge status. Full pardon for draft dodgers came in the Carter administration. [88]

When Ford assumed office, he inherited Nixon's Cabinet. During his brief administration, he replaced all members except Secretary of State Kissinger and Secretary of the Treasury William E. Simon. Political commentators have referred to Ford's dramatic reorganization of his Cabinet in the fall of 1975 as the "Halloween Massacre". One of Ford's appointees, William Coleman, the Secretary of Transportation, was the second black man to serve in a presidential cabinet (after Robert C. Weaver) and the first appointed in a Republican administration. [89] Ford selected George H. W. Bush as Chief of the US Liaison Office to the People's Republic of China in 1974, and then Director of the Central Intelligence Agency in late 1975. [90] Ford's transition chairman and first Chief of Staff was former congressman and ambassador Donald Rumsfeld. In 1975, Ford named Rumsfeld the youngest-ever Secretary of Defense.
Ford chose a young Wyoming politician, Richard Cheney, to replace Rumsfeld as his new Chief of Staff; Cheney became the campaign manager for Ford's 1976 presidential campaign. [91] The 1974 Congressional midterm elections took place in the wake of the Watergate scandal and less than three months after Ford assumed office. The Democratic Party turned voter dissatisfaction into large gains in the House elections, taking 49 seats from the Republican Party, increasing their majority to 291 of the 435 seats. This was one more than the number needed (290) for a two-thirds majority, the number necessary to override a Presidential veto or to propose a constitutional amendment. Perhaps due in part to this fact, the 94th Congress overrode the highest percentage of vetoes since Andrew Johnson was President of the United States (1865–1869). [92] Even Ford's former, reliably Republican House seat was won by a Democrat, Richard Vander Veen, who defeated Robert VanderLaan. In the Senate elections, the Democratic majority became 61 in the 100-seat body. [93] Ford meeting with his Cabinet, 1975 The economy was a great concern during the Ford administration. One of the first acts the new president took to deal with the economy was to create, by Executive Order on September 30, 1974, the Economic Policy Board. [94] In October 1974, in response to rising inflation, Ford went before the American public and asked them to "Whip Inflation Now". As part of this program, he urged people to wear "WIN" buttons. [95] At the time, inflation was believed to be the primary threat to the economy, more so than growing unemployment; there was a belief that controlling inflation would help reduce unemployment. [94] To rein in inflation, it was necessary to control the public's spending. To try to mesh service and sacrifice, "WIN" called for Americans to reduce their spending and consumption. 
[96] On October 4, 1974, Ford gave a speech in front of a joint session of Congress; as a part of this speech he kicked off the "WIN" campaign. Over the next nine days, 101,240 Americans mailed in "WIN" pledges. [94] In hindsight, this was viewed as simply a public relations gimmick which had no way of solving the underlying problems. [97] The main point of that speech was to introduce to Congress a one-year, five-percent income tax increase on corporations and wealthy individuals. This plan would also take $4.4 billion out of the budget, bringing federal spending below $300 billion. [98] At the time, inflation was over twelve percent. [99] Ford and his golden retriever, Liberty, in the Oval Office, 1974 The federal budget ran a deficit every year Ford was president. [100] Despite his reservations about how the program ultimately would be funded in an era of tight public budgeting, Ford signed the Education for All Handicapped Children Act of 1975, which established special education throughout the United States. Ford expressed "strong support for full educational opportunities for our handicapped children" according to the official White House press release for the bill signing. [101] The economic focus began to change as the country sank into the worst recession since the Great Depression four decades earlier. [102] The focus of the Ford administration turned to stopping the rise in unemployment, which reached nine percent in May 1975. [103] In January 1975, Ford proposed a 1-year tax reduction of $16 billion to stimulate economic growth, along with spending cuts to avoid inflation. [98] Ford was criticized for abruptly switching from advocating a tax increase to a tax reduction. In Congress, the proposed amount of the tax reduction increased to $22.8 billion in tax cuts and lacked spending cuts. [94] In March 1975, Congress passed, and Ford signed into law, these income tax rebates as part of the Tax Reduction Act of 1975. 
This resulted in a federal deficit of around $53 billion for the 1975 fiscal year and $73.7 billion for 1976. [104] When New York City faced bankruptcy in 1975, Mayor Abraham Beame was unsuccessful in obtaining Ford's support for a federal bailout. The incident prompted the New York Daily News' famous headline "Ford to City: Drop Dead", referring to a speech in which "Ford declared flatly ... that he would veto any bill calling for 'a federal bail-out of New York City'". [105] [106] Ford was confronted with a potential swine flu pandemic. In the early 1970s, an H1N1 influenza strain shifted from a form that primarily affected pigs to one that crossed over to humans. On February 5, 1976, an army recruit at Fort Dix mysteriously died and four fellow soldiers were hospitalized; health officials announced that "swine flu" was the cause. Soon after, public health officials in the Ford administration urged that every person in the United States be vaccinated. [107] Although the vaccination program was plagued by delays and public relations problems, some 25% of the population was vaccinated by the time the program was canceled in December 1976. [108]

Equal rights and abortion

Cheney, Rumsfeld and Ford in the Oval Office, 1975 Ford was an outspoken supporter of the Equal Rights Amendment, issuing Presidential Proclamation no. 4383 in 1975: In this Land of the Free, it is right, and by nature it ought to be, that all men and all women are equal before the law. Now, therefore, I, Gerald R. Ford, President of the United States of America, to remind all Americans that it is fitting and just to ratify the Equal Rights Amendment adopted by the Congress of the United States of America, in order to secure legal equality for all women and men, do hereby designate and proclaim August 26, 1975, as Women's Equality Day.
[109] As president, Ford's position on abortion was that he supported "a federal constitutional amendment that would permit each one of the 50 States to make the choice". [110] This had also been his position as House Minority Leader in response to the 1973 Supreme Court case of Roe v. Wade , which he opposed. [111] Ford came under criticism when First Lady Betty Ford entered the debate over abortion during an August 1975 interview for 60 Minutes , in which she stated that Roe v. Wade was a "great, great decision". [112] During his later life, Ford would identify as pro-choice. [113] Ford meets with Soviet leader Leonid Brezhnev to sign a joint communiqué on the SALT treaty during the Vladivostok Summit, November 1974. Ford continued the détente policy with both the Soviet Union and China, easing the tensions of the Cold War. Still in place from the Nixon administration was the Strategic Arms Limitation Treaty (SALT). [114] The thawing relationship brought about by Nixon's visit to China was reinforced by Ford's own visit in December 1975. [115] The Administration entered into the Helsinki Accords [116] with the Soviet Union in 1975, creating the framework of the Helsinki Watch, an independent non-governmental organization created to monitor compliance which later evolved into Human Rights Watch. [117] Ford attended the inaugural meeting of the Group of Seven (G7) industrialized nations (initially the G5) in 1975 and secured membership for Canada. Ford supported international solutions to issues. "We live in an interdependent world and, therefore, must work together to resolve common economic problems," he said in a 1974 speech. [118] In November 1975, Ford adopted the global human population control recommendations of National Security Study Memorandum 200 – a national security directive initially commissioned by Nixon – as United States policy in the subsequent NSDM 314. 
[119] [120] The plan explicitly stated that the goal was population control rather than improving the lives of individuals, even as it instructed organizers to "emphasize development and improvements in the quality of life of the poor", explaining that the projects were "primarily for other reasons". [121] [122] Upon approving the plan, Ford stated "United States leadership is essential to combat population growth, to implement the World Population Plan of Action and to advance United States security and overseas interests". [123] Population control policies were adopted to protect American economic and military interests. The memorandum argued that population growth in developing countries would give those nations greater global political power, that larger populations threatened American access to foreign natural resources and made American businesses vulnerable to governments seeking to fund growing populations, and that younger generations would be prone to anti-establishment behavior, increasing political instability. [119] [123] [124] Countries visited by Ford during his presidency In the Middle East and eastern Mediterranean, two ongoing international disputes developed into crises. The Cyprus dispute turned into a crisis with the Turkish invasion of Cyprus in July 1974, causing extreme strain within the North Atlantic Treaty Organization (NATO) alliance. In mid-August, the Greek government withdrew Greece from the NATO military structure; in mid-September, the Senate and House of Representatives overwhelmingly voted to halt military aid to Turkey. Ford, concerned with both the effect of this on Turkish-American relations and the deterioration of security on NATO's eastern front, vetoed the bill. A second bill was then passed by Congress, which Ford also vetoed, fearing that it might impede negotiations in Cyprus, although a compromise was accepted to continue aid until December 10, 1974, provided Turkey would not send American supplies to Cyprus.
[4] U.S. military aid to Turkey was suspended on February 5, 1975. [4] Ford with Anwar Sadat in Salzburg, 1975 In the continuing Arab–Israeli conflict, although the initial cease-fire had been implemented to end active conflict in the Yom Kippur War, Kissinger's continuing shuttle diplomacy was showing little progress. Ford considered it "stalling" and wrote, "Their [Israeli] tactics frustrated the Egyptians and made me mad as hell." [125] During Kissinger's shuttle to Israel in early March 1975, a last-minute Israeli reversal on considering further withdrawal prompted a cable from Ford to Prime Minister Yitzhak Rabin, which included: I wish to express my profound disappointment over Israel's attitude in the course of the negotiations ... Failure of the negotiation will have a far reaching impact on the region and on our relations. I have given instructions for a reassessment of United States policy in the region, including our relations with Israel, with the aim of ensuring that overall American interests ... are protected. You will be notified of our decision. [126] On March 24, Ford informed congressional leaders of both parties of the reassessment of the administration's policies in the Middle East. In practical terms, "reassessment" meant canceling or suspending further aid to Israel. For six months between March and September 1975, the United States refused to conclude any new arms agreements with Israel. Rabin noted that it was "an innocent-sounding term that heralded one of the worst periods in American-Israeli relations". [127] The announced reassessments upset the American Jewish community and Israel's well-wishers in Congress. On May 21, Ford "experienced a real shock" when seventy-six U.S. senators wrote him a letter urging him to be "responsive" to Israel's request for $2.59 billion (equivalent to $14.09 billion in 2022) in military and economic aid. Ford felt truly annoyed and thought the chance for peace was jeopardized.
It was, since the September 1974 ban on arms sales to Turkey, the second major congressional intrusion upon the President's foreign policy prerogatives. [128] The following summer months were described by Ford as an American-Israeli "war of nerves" or "test of wills". [129] After much bargaining, the Sinai Interim Agreement (Sinai II) was formally signed on September 1, and aid resumed.

Ford and his daughter Susan watch as Henry Kissinger (right) shakes hands with Mao Zedong, December 2, 1975.

One of Ford's greatest challenges was dealing with the continuing Vietnam War. American offensive operations against North Vietnam had ended with the Paris Peace Accords, signed on January 27, 1973. The accords declared a cease-fire across both North and South Vietnam, and required the release of American prisoners of war. The agreement guaranteed the territorial integrity of Vietnam and, like the Geneva Conference of 1954, called for national elections in the North and South. The Paris Peace Accords stipulated a sixty-day period for the total withdrawal of U.S. forces. [130] The agreements were negotiated by US National Security Advisor Henry Kissinger and North Vietnamese Politburo member Lê Đức Thọ. South Vietnamese President Nguyen Van Thieu was not involved in the final negotiations, and publicly criticized the proposed agreement. However, anti-war pressures within the United States forced Nixon and Kissinger to pressure Thieu to sign the agreement and enable the withdrawal of American forces. In multiple letters to the South Vietnamese president, Nixon had promised that the United States would defend Thieu's government, should the North Vietnamese violate the accords. [131] In December 1974, months after Ford took office, North Vietnamese forces invaded the province of Phuoc Long. General Trần Văn Trà sought to gauge any South Vietnamese or American response to the invasion, as well as to solve logistical issues, before proceeding with a wider offensive.
[132] As North Vietnamese forces advanced, Ford requested Congress approve a $722 million aid package for South Vietnam (equivalent to $3.93 billion in 2022), funds that had been promised by the Nixon administration. Congress voted against the proposal by a wide margin. [114] Senator Jacob K. Javits offered "...large sums for evacuation, but not one nickel for military aid". [114] President Thieu resigned on April 21, 1975, publicly blaming the lack of support from the United States for the fall of his country. [133] Two days later, on April 23, Ford gave a speech at Tulane University. In that speech, he announced that the Vietnam War was over "...as far as America is concerned". [131] The announcement was met with thunderous applause. [131] 1,373 U.S. citizens and 5,595 Vietnamese and third-country nationals were evacuated from the South Vietnamese capital of Saigon during Operation Frequent Wind. Many of the Vietnamese evacuees were allowed to enter the United States under the Indochina Migration and Refugee Assistance Act. The 1975 Act appropriated $455 million (equivalent to $2.47 billion in 2022) toward the costs of assisting the settlement of Indochinese refugees. [134] In all, 130,000 Vietnamese refugees came to the United States in 1975. Thousands more escaped in the years that followed. [135] North Vietnam's victory over the South led to a considerable shift in the political winds in Asia, and Ford administration officials worried about a consequent loss of U.S. influence there. The administration proved it was willing to respond forcefully to challenges to its interests in the region when Khmer Rouge forces seized an American ship in international waters. [136] The main crisis was the Mayaguez incident. In May 1975, shortly after the fall of Saigon and the Khmer Rouge conquest of Cambodia, Cambodians seized the American merchant ship Mayaguez in international waters. 
[137] Ford dispatched Marines to rescue the crew, but the Marines landed on the wrong island and met unexpectedly stiff resistance just as, unknown to the U.S., the Mayaguez sailors were being released. In the operation, two military transport helicopters carrying the Marines for the assault operation were shot down, and 41 U.S. servicemen were killed and 50 wounded, while approximately 60 Khmer Rouge soldiers were killed. [138] Despite the American losses, the operation was seen as a success in the United States, and Ford enjoyed an 11-point boost in his approval ratings in the aftermath. [139] The Americans killed during the operation became the last to have their names inscribed on the Vietnam Veterans Memorial wall in Washington, D.C. Some historians have argued that the Ford administration felt the need to respond forcefully to the incident because it was construed as a Soviet plot. [140] But work by Andrew Gawthorpe, published in 2009, based on an analysis of the administration's internal discussions, shows that Ford's national security team understood that the seizure of the vessel was a local, and perhaps even accidental, provocation by an immature Khmer government. Nevertheless, they felt the need to respond forcefully to discourage further provocations by other Communist countries in Asia. [141]

Reaction immediately after the second assassination attempt

Ford was the target of two assassination attempts during his presidency. In Sacramento, California, on September 5, 1975, Lynette "Squeaky" Fromme, a follower of Charles Manson, pointed a Colt .45-caliber handgun at Ford and pulled the trigger at point-blank range. [142] [143] As she did, Larry Buendorf, [144] a Secret Service agent, grabbed the gun, and Fromme was taken into custody. She was later convicted of attempted assassination of the President and was sentenced to life in prison; she was paroled on August 14, 2009, after serving 34 years.
[145] In reaction to this attempt, the Secret Service began keeping Ford at a more secure distance from anonymous crowds, a strategy that may have saved his life seventeen days later. As he left the St. Francis Hotel in downtown San Francisco, Sara Jane Moore, standing in a crowd of onlookers across the street, fired a .38-caliber revolver at him. The shot missed Ford by a few feet. [142] [146] Before she fired a second round, retired Marine Oliver Sipple grabbed at the gun and deflected her shot; the bullet struck a wall about six inches above and to the right of Ford's head, then ricocheted and hit a taxi driver, who was slightly wounded. Moore was later sentenced to life in prison. She was paroled on December 31, 2007, after serving 32 years. [147]

John Paul Stevens, Ford's only Supreme Court appointment

In 1975, Ford appointed John Paul Stevens as Associate Justice of the Supreme Court of the United States to replace retiring Justice William O. Douglas. Stevens had been a judge of the United States Court of Appeals for the Seventh Circuit, appointed by President Nixon. [148] During his tenure as House Republican leader, Ford had led efforts to have Douglas impeached. [149] After being confirmed, Stevens eventually disappointed some conservatives by siding with the Court's liberal wing regarding the outcome of many key issues. [150] Nevertheless, in 2005 Ford praised Stevens. "He has served his nation well," Ford said of Stevens, "with dignity, intellect and without partisan political concerns." [151]

Other judicial appointments

Ford appointed 11 judges to the United States Courts of Appeals, and 50 judges to the United States district courts. [152]

1976 presidential election

Governor Ronald Reagan congratulates President Ford after the president successfully wins the 1976 Republican nomination, while Bob Dole, Nancy Reagan, and Nelson Rockefeller look on.
Ford reluctantly agreed to run for office in 1976, but first he had to counter a challenge for the Republican party nomination. Former Governor of California Ronald Reagan and the party's conservative wing faulted Ford for failing to do more in South Vietnam, for signing the Helsinki Accords, and for negotiating to cede the Panama Canal. (Negotiations for the canal continued under President Carter, who eventually signed the Torrijos–Carter Treaties.) Reagan launched his campaign in autumn of 1975 and won numerous primaries, including North Carolina, Texas, Indiana, and California, but failed to get a majority of delegates; Reagan withdrew from the race at the Republican Convention in Kansas City, Missouri. The conservative insurgency did lead to Ford dropping the more liberal Vice President Nelson Rockefeller in favor of U.S. Senator Bob Dole of Kansas. [153] In addition to the pardon dispute and lingering anti-Republican sentiment, Ford had to counter a plethora of negative media imagery. Chevy Chase often did pratfalls on Saturday Night Live, imitating Ford, who had been seen stumbling on two occasions during his term. As Chase commented, "He even mentioned in his own autobiography it had an effect over a period of time that affected the election to some degree." [154] Ford's 1976 election campaign benefited from his being an incumbent president during several anniversary events held during the period leading up to the United States Bicentennial. The Washington, D.C. fireworks display on the Fourth of July was presided over by the President and televised nationally. [155] On July 7, 1976, the President and First Lady served as hosts at a White House state dinner for Queen Elizabeth II and Prince Philip of the United Kingdom, which was televised on the Public Broadcasting Service network.
The 200th anniversary of the Battles of Lexington and Concord in Massachusetts gave Ford the opportunity to deliver a speech to 110,000 in Concord acknowledging the need for a strong national defense tempered with a plea for "reconciliation, not recrimination" and "reconstruction, not rancor" between the United States and those who would pose "threats to peace". [156] Speaking in New Hampshire on the previous day, Ford condemned the growing trend toward big government bureaucracy and argued for a return to "basic American virtues". [157]

Jimmy Carter and Ford in a presidential debate, September 23, 1976

Televised presidential debates were reintroduced for the first time since the 1960 election. As such, Ford became the first incumbent president to participate in one. Carter later attributed his victory in the election to the debates, saying they "gave the viewers reason to think that Jimmy Carter had something to offer". The turning point came in the second debate when Ford blundered by stating, "There is no Soviet domination of Eastern Europe and there never will be under a Ford Administration." Ford also said that he did not "believe that the Poles consider themselves dominated by the Soviet Union". [158] In an interview years later, Ford said he had intended to imply that the Soviets would never crush the spirits of eastern Europeans seeking independence. However, the phrasing was so awkward that questioner Max Frankel was visibly incredulous at the response. [159]

1976 electoral vote results

In the end, Carter won the election, receiving 50.1% of the popular vote and 297 electoral votes compared with 48.0% and 240 electoral votes for Ford. [160]

Post-presidency (1977–2006)

The Nixon pardon controversy eventually subsided. Ford's successor, Jimmy Carter, opened his 1977 inaugural address by praising the outgoing president, saying, "For myself and for our Nation, I want to thank my predecessor for all he has done to heal our land."
[161] After leaving the White House, the Fords moved to Denver, Colorado. Ford successfully invested in oil with Marvin Davis, which later provided an income for Ford's children. [162] He continued to make appearances at events of historical and ceremonial significance to the nation, such as presidential inaugurals and memorial services. In January 1977, he became the president of Eisenhower Fellowships in Philadelphia, then served as the chairman of its board of trustees from 1980 to 1986. [163] Later in 1977, he reluctantly agreed to be interviewed by James M. Naughton, a New York Times journalist who was given the assignment to write the former president's advance obituary, an article that would be updated prior to its eventual publication. [164] In 1979, Ford published his autobiography, A Time to Heal (Harper/Reader's Digest, 454 pages). A review in Foreign Affairs described it as, "Serene, unruffled, unpretentious, like the author. This is the shortest and most honest of recent presidential memoirs, but there are no surprises, no deep probings of motives or events. No more here than meets the eye." [165] During the term of office of his successor, Jimmy Carter, Ford received monthly briefings from Carter's senior staff on international and domestic issues, and was always invited to lunch at the White House whenever he was in Washington, D.C. Their close friendship developed after Carter had left office, with the catalyst being their trip together to the funeral of Anwar el-Sadat in 1981. [166] Until Ford's death, Carter and his wife, Rosalynn, visited the Fords' home frequently. [167] Ford and Carter served as honorary co-chairs of the National Commission on Federal Election Reform in 2001 and of the Continuity of Government Commission in 2002. Like Presidents Carter, George H. W.
Bush, and Bill Clinton, Ford was an honorary co-chair of the Council for Excellence in Government, a group dedicated to excellence in government performance, which provides leadership training to top federal employees. He also devoted much time to his love of golf, often playing both privately and in public events with comedian Bob Hope, a longtime friend. In 1977, he shot a hole in one during a Pro-am held in conjunction with the Danny Thomas Memphis Classic at Colonial Country Club in Memphis, Tennessee. He hosted the Jerry Ford Invitational in Vail, Colorado from 1977 to 1996. In 1977, Ford established the Gerald R. Ford Institute of Public Policy at Albion College in Albion, Michigan, to give undergraduates training in public policy. In April 1981, he opened the Gerald R. Ford Library in Ann Arbor, Michigan, on the north campus of his alma mater, the University of Michigan, [168] followed in September by the Gerald R. Ford Museum in Grand Rapids. [169] [170] Ford considered a run for the Republican nomination in 1980, forgoing numerous opportunities to serve on corporate boards to keep his options open for a rematch with Carter. Ford attacked Carter's conduct of the SALT II negotiations and foreign policy in the Middle East and Africa. Many have argued that Ford also wanted to exorcise his image as an "Accidental President" and to win a term in his own right. Ford also believed the more conservative Ronald Reagan would be unable to defeat Carter and would hand the incumbent a second term. Ford was encouraged by his former Secretary of State, Henry Kissinger, as well as Jim Rhodes of Ohio and Bill Clements of Texas to make the race. On March 15, 1980, Ford announced that he would forgo a run for the Republican nomination, vowing to support the eventual nominee.
[171]

On July 16, 1980 (day 3 of the 1980 Republican National Convention), Gerald Ford consults with Bob Dole, Howard Baker, and Bill Brock before ultimately deciding to decline the offer to serve as Ronald Reagan's running mate.

After securing the Republican nomination in 1980, Ronald Reagan considered his former rival Ford as a potential vice-presidential running mate, but negotiations between the Reagan and Ford camps at the Republican National Convention were unsuccessful. Ford conditioned his acceptance on Reagan's agreement to an unprecedented "co-presidency", [172] giving Ford the power to control key executive branch appointments (such as Kissinger as Secretary of State and Alan Greenspan as Treasury Secretary). After rejecting these terms, Reagan offered the vice-presidential nomination instead to George H. W. Bush. [173] Ford did appear in a campaign commercial for the Reagan-Bush ticket, in which he declared that the country would be "better served by a Reagan presidency rather than a continuation of the weak and politically expedient policies of Jimmy Carter". [174] On October 8, 1980, Ford said former President Nixon's involvement in the general election could potentially hurt the Reagan campaign: "I think it would have been much more helpful if Mr. Nixon had stayed in the background during this campaign. It would have been much more beneficial to Ronald Reagan." [175] On October 3, 1980, Ford criticized Carter over the latter's charges of ineffectiveness on the part of the Federal Reserve Board, given that Carter had appointed most of its members: "President Carter, when the going gets tough, will do anything to save his own political skin. This latest action by the president is cowardly." [176] Following the attempted assassination of Ronald Reagan, Ford told reporters while appearing at a fundraiser for Thomas Kean that criminals who use firearms should get the death penalty if someone is injured with the weapon.
[177] In September 1981, during an appearance on Good Morning America, Ford advised Reagan to resist Wall Street demands and follow his own agenda for the economic policies of the US: "He shouldn't let the gurus of Wall Street decide what the economic future of this country is going to be. They are wrong in my opinion." [178] On October 20, 1981, Ford stated at a news conference that stopping the Reagan administration's Saudi arms package could have a large negative impact on American relations in the Middle East. [179] On March 24, 1982, Ford offered an endorsement of President Reagan's economic policies while also warning that Reagan could be met with a stalemate by Congress if he was unwilling to compromise. [180] Ford founded the annual AEI World Forum in 1982, and joined the American Enterprise Institute as a distinguished fellow. He was also awarded an honorary doctorate by Central Connecticut State University [181] on March 23, 1988. During an August 1982 fundraising reception, Ford stated his opposition to a constitutional amendment requiring the US to have a balanced budget, citing a need to elect "members of the House and Senate who will immediately when Congress convenes act more responsibly in fiscal matters." [182] Ford was a participant in the 1982 midterm elections, traveling to Tennessee in October of that year to help Republican candidates. [183] In January 1984, a letter signed by Ford and Carter urging world leaders to extend their failed effort to end world hunger was released and sent to Secretary-General of the United Nations Javier Pérez de Cuéllar. [184] In 1987, Ford testified before the Senate Judiciary Committee in favor of District of Columbia Circuit Court judge and former Solicitor General Robert Bork after Bork was nominated by President Reagan to be an Associate Justice of the United States Supreme Court. [185] Bork's nomination was rejected by a vote of 58–42.
[186] In 1987, Ford's Humor and the Presidency, a book of humorous political anecdotes, was published. By 1988, Ford was a member of several corporate boards including Commercial Credit, Nova Pharmaceutical, The Pullman Company, Tesoro Petroleum, and Tiger International, Inc. [187] Ford also became an honorary director of Citigroup, a position he held until his death. [188] In October 1990, Ford appeared in Gettysburg, Pennsylvania with Bob Hope to commemorate the centennial anniversary of the birth of former President Dwight D. Eisenhower, where the two unveiled a plaque with the signatures of each living former president. [189] In April 1991, Ford joined former presidents Richard Nixon, Ronald Reagan, and Jimmy Carter in supporting the Brady Bill. [190] Three years later, he wrote to the U.S. House of Representatives, along with Carter and Reagan, in support of the assault weapons ban. [191] At the 1992 Republican National Convention, Ford compared the election cycle to his 1976 loss to Carter and urged attention be paid to electing a Republican Congress: "If it's change you want on Nov. 3, my friends, the place to start is not at the White House but in the United States' Capitol. Congress, as every school child knows, has the power of the purse. For nearly 40 years, Democratic majorities have held to the time-tested New Deal formula, tax and tax, spend and spend, elect and elect." (The Republicans would later win both Houses of Congress at the 1994 mid-term elections.) [192]

Ford joins President Bill Clinton and former presidents George H. W. Bush and Jimmy Carter onstage at the dedication of the George H.W. Bush Presidential Library and Museum at Texas A&M University, November 6, 1997.

Ford at his 90th birthday with Laura Bush, President George W. Bush, and Betty Ford in the White House State Dining Room in 2003

In April 1997, Ford joined President Bill Clinton, former President Bush, and Nancy Reagan in signing the "Summit Declaration of Commitment" in advocating for participation by private citizens in solving domestic issues within the United States. [193] On January 20, 1998, during an interview at his Palm Springs home, Ford said the Republican Party's nominee in the 2000 presidential election would lose if the party turned ultra-conservative in their ideals: "If we get way over on the hard right of the political spectrum, we will not elect a Republican President. I worry about the party going down this ultra-conservative line. We ought to learn from the Democrats: when they were running ultra-liberal candidates, they didn't win." [194] In the prelude to the impeachment of President Clinton, Ford conferred with former President Carter and the two agreed to not speak publicly on the controversy, a pact broken by Carter when answering a question from a student at Emory University. [195] In October 2001, Ford broke with conservative members of the Republican Party by stating that gay and lesbian couples "ought to be treated equally. Period." He became the highest-ranking Republican to embrace full equality for gays and lesbians, stating his belief that there should be a federal amendment outlawing anti-gay job discrimination and expressing his hope that the Republican Party would reach out to gay and lesbian voters. [196] He also was a member of the Republican Unity Coalition, which The New York Times described as "a group of prominent Republicans, including former President Gerald R. Ford, dedicated to making sexual orientation a non-issue in the Republican Party". [197] On November 22, 2004, New York Republican Governor George Pataki named Ford and the other living former Presidents (Carter, George H. W. Bush and Bill Clinton) as honorary members of the board rebuilding the World Trade Center.
In a pre-recorded embargoed interview with Bob Woodward of The Washington Post in July 2004, Ford stated that he disagreed "very strongly" with the Bush administration's choice of Iraq's alleged weapons of mass destruction as justification for its decision to invade Iraq, calling it a "big mistake" unrelated to the national security of the United States and indicating that he would not have gone to war had he been president. The details of the interview were not released until after Ford's death, as he requested. [198] [199]

On April 4, 1990, Ford was admitted to Eisenhower Medical Center for surgery to replace his left knee, orthopedic surgeon Robert Murphy saying, "Ford's entire left knee was replaced with an artificial joint, including portions of the adjacent femur, or thigh bone, and tibia, or leg bone." [200] Ford suffered two minor strokes at the 2000 Republican National Convention, but made a quick recovery after being admitted to Hahnemann University Hospital. [201] [202] In January 2006, he spent 11 days at the Eisenhower Medical Center near his residence at Rancho Mirage, California, for treatment of pneumonia. [203] On April 23, 2006, President George W. Bush visited Ford at his home in Rancho Mirage for a little over an hour. This was Ford's last public appearance and produced the last known public photos, video footage, and voice recording. While vacationing in Vail, Colorado, Ford was hospitalized for two days in July 2006 for shortness of breath. [204] On August 15 he was admitted to St. Mary's Hospital of the Mayo Clinic in Rochester, Minnesota, for testing and evaluation. On August 21, it was reported that he had been fitted with a pacemaker. On August 25, he underwent an angioplasty procedure at the Mayo Clinic. On August 28, Ford was released from the hospital and returned with his wife Betty to their California home. On October 13, he was scheduled to attend the dedication of a building of his namesake, the Gerald R.
Ford School of Public Policy at the University of Michigan, but due to poor health and on the advice of his doctors he did not attend. The previous day, Ford had entered the Eisenhower Medical Center for undisclosed tests; he was released on October 16. [205] By November 2006, he was confined to a bed in his study. [206]

Ford lying in state in the Capitol rotunda

Ford died on December 26, 2006, at his home in Rancho Mirage, California, of arteriosclerotic cerebrovascular disease and diffuse arteriosclerosis. He had end-stage coronary artery disease and severe aortic stenosis and insufficiency, caused by calcific alteration of one of his heart valves. [207] At the time of his death, Ford was the longest-lived U.S. president, having lived 93 years and 165 days (45 days longer than Ronald Reagan, whose record he surpassed). [31] He died on the 34th anniversary of President Harry S. Truman's death; he was the last surviving member of the Warren Commission. [208] On December 30, 2006, Ford became the 11th U.S. president to lie in state in the Rotunda of the U.S. Capitol. [209] A state funeral and memorial services were held at the National Cathedral in Washington, D.C., on Tuesday, January 2, 2007. After the service, Ford was interred at his Presidential Museum in Grand Rapids, Michigan. [210] Scouting was so important to Ford that his family asked for Scouts to participate in his funeral. A few selected Scouts served as ushers inside the National Cathedral. About 400 Eagle Scouts were part of the funeral procession, where they formed an honor guard as the casket went by in front of the museum. [211] Ford selected the song to be played during his funeral procession at the U.S. Capitol. [212] After his death in December 2006, the University of Michigan Marching Band played the school's fight song for him one final time, for his last ride from the Gerald R. Ford Airport in Grand Rapids, Michigan.
[213] The State of Michigan commissioned and submitted a statue of Ford to the National Statuary Hall Collection, replacing Zachariah Chandler. It was unveiled on May 3, 2011, in the Capitol Rotunda. On the proper right side is inscribed a quotation from a tribute by Tip O'Neill, Speaker of the House at the end of Ford's presidency: "God has been good to America, especially during difficult times. At the time of the Civil War, he gave us Abraham Lincoln. And at the time of Watergate, he gave us Gerald Ford—the right man at the right time who was able to put our nation back together again." On the proper left side are words from Ford's swearing-in address: "Our constitution works. Our great republic is a government of laws and not of men. Here the people rule."

When speaking of his mother and stepfather, Ford said that "My stepfather was a magnificent person and my mother equally wonderful. So I couldn't have written a better prescription for a superb family upbringing." [17] Ford had three half-siblings from the second marriage of Leslie King Sr., his biological father: Marjorie King (1921–1993), Leslie Henry King (1923–1976), and Patricia Jane King (1925–1980). They never saw one another as children, and he did not know them at all until 1960. Ford was not aware of his biological father until he was 17, when his parents told him about the circumstances of his birth. That year his biological father, whom Ford described as a "carefree, well-to-do man who didn't really give a damn about the hopes and dreams of his firstborn son", approached Ford while he was waiting tables in a Grand Rapids restaurant. The two "maintained a sporadic contact" until Leslie King Sr.'s death in 1941. [11] [214]

The Fords on their wedding day, October 15, 1948

On October 15, 1948, Ford married Elizabeth Bloomer (1918–2011) at Grace Episcopal Church in Grand Rapids; it was his first and only marriage and her second marriage.
She had previously been married and, after a five-year marriage, divorced from William Warren. [215] Originally from Grand Rapids herself, she had lived in New York City for several years, where she worked as a John Robert Powers fashion model and a dancer in the auxiliary troupe of the Martha Graham Dance Company. At the time of their engagement, Ford was campaigning for what would be his first of 13 terms as a member of the United States House of Representatives. The wedding was delayed until shortly before the election because, as The New York Times reported in a 1974 profile of Betty Ford, "Jerry Ford was running for Congress and wasn't sure how voters might feel about his marrying a divorced ex-dancer." [215] The couple had four children: Michael Gerald, born in 1950, John Gardner (known as Jack), born in 1952, Steven Meigs, born in 1956, and Susan Elizabeth, born in 1957. [142]

Civic and fraternal organizations

Ford was a member of several civic and fraternal organizations, including the Junior Chamber of Commerce (Jaycees), American Legion, AMVETS, Benevolent and Protective Order of Elks, Sons of the Revolution, [216] Veterans of Foreign Wars, and was an alumnus of Delta Kappa Epsilon at Michigan. Ford was initiated into Freemasonry on September 30, 1949. [217] He later said in 1975, "When I took my obligation as a master mason—incidentally, with my three younger brothers—I recalled the value my own father attached to that order. But I had no idea that I would ever be added to the company of the Father of our Country and 12 other members of the order who also served as Presidents of the United States." [218] Ford was made a 33° Scottish Rite Mason on September 26, 1962. [219] In April 1975, Ford was elected by a unanimous vote as Honorary Grand Master of the International Supreme Council, Order of DeMolay, a position in which he served until January 1977.
[220] Ford received the degrees of York Rite Masonry (Chapter and Council degrees) in a special ceremony in the Oval Office on January 11, 1977, during his term as President of the United States. [221] Ford was also a member of the Shriners and the Royal Order of Jesters, both affiliated bodies of Freemasonry. [222]

President George W. Bush with Ford and his wife Betty on April 23, 2006

Ford is the only person to hold the presidential office without being elected as either president or vice president. The choice of Ford to fill the vacant vice-presidency was based on his reputation for openness and honesty. [223] "In all the years I sat in the House, I never knew Mr. Ford to make a dishonest statement nor a statement part-true and part-false. He never attempted to shade a statement, and I never heard him utter an unkind word," said Martha Griffiths. [224] The trust the American public had in him was rapidly and severely tarnished by his pardon of Nixon. [224] Nonetheless, many grant in hindsight that he discharged with considerable dignity a great responsibility that he had not sought. [224] In spite of his athletic record and remarkable career accomplishments, Ford acquired a reputation as a clumsy, likable, and simple-minded everyman. An incident in 1975, when he tripped while exiting Air Force One in Austria, was famously and repeatedly parodied by Chevy Chase on Saturday Night Live, cementing Ford's image as a klutz. [224] [225] [226] Other pieces of the everyman image were attributed to his inevitable comparison with Nixon, his Midwestern stodginess and his self-deprecation. [223] Ford has notably been portrayed in two television productions which included a central focus on his wife: the Emmy-winning 1987 ABC biographical television movie The Betty Ford Story [227] and the 2022 Showtime television series The First Lady.
[228]

The following were named after Ford:

References

- Celebrating the life of President Gerald R. Ford on what would have been his 96th birthday. Archived April 15, 2016, at the Wayback Machine.
- H.R. 409, 111th Congress, 1st Session (2009).
- Brinkley, Douglas. Gerald R. Ford. New York: Times Books, 2007. (Cited at pp. 7, 9, and 73.)
- Unger, Irwin. The Best of Intentions: The Triumphs and Failures of the Great Society under Kennedy, Johnson, and Nixon. Doubleday, 1996, p. 104.
- "Gerald R. Ford Events Timeline," The American Presidency Project, University of California, Santa Barbara, Gerhard Peters and John T. Woolley, last edited February 2, 2021.
- Woodward, Bob. Shadow, chapter on Gerald Ford; Woodward interviewed Ford on this matter about twenty years after Ford left the presidency.
- "Sen. Ted Kennedy crossed political paths with Grand Rapids' most prominent Republican, President Gerald R. Ford", The Grand Rapids Press, August 26, 2009. Retrieved January 5, 2010.
- Secretary of Transportation: William T. Coleman Jr. (1975–1977), AmericanPresident.org (January 15, 2005). Archived June 28, 2006, at the Wayback Machine. Retrieved December 31, 2006.
- Richard B. Cheney. United States Department of Defense. Archived September 3, 1999, at the Wayback Machine. Retrieved December 31, 2006.
- Bush vetoes less than most presidents, CNN, May 1, 2007. Retrieved October 19, 2007.
- Renka, Russell D. Nixon's Fall and the Ford and Carter Interregnum. Southeast Missouri State University (April 10, 2003). Retrieved December 31, 2006.
- Greene, John Robert. The Presidency of Gerald R. Ford. University Press of Kansas, 1995.
- Gerald Ford Speeches: Whip Inflation Now (October 8, 1974), Miller Center of Public Affairs. Archived August 29, 2008, at the Wayback Machine. Retrieved May 18, 2011.
- Crain, Andrew Downer. The Ford Presidency. Jefferson, North Carolina: McFarland, 2009.
- CRS Report RL33305, The Crude Oil Windfall Profit Tax of the 1980s: Implications for Current Energy Policy, by Salvatore Lazzari, p. 5. Archived February 11, 2012, at the Wayback Machine.
- Pandemic Pointers. Living on Earth, March 3, 2006. Retrieved December 31, 2006.
- Mickle, Paul. 1976: Fear of a great plague. The Trentonian. Retrieved December 31, 2006.
- Ford, Gerald. A Time to Heal, 1979. (Cited at pp. 240, 284, and 298.)
- Rabin, Yitzhak (1996), University of California Press, p. 256, ISBN 978-0-520-20766-0.
- Lenczowski, George. American Presidents and the Middle East, 1990, p. 150.
- Jones, Plummer Alston (2004). "Still struggling for equality: American public library services with minorities". Libraries Unlimited, p. 84. ISBN 1-59158-243-1.
- Gawthorpe, A. J. (2009), "The Ford Administration and Security Policy in the Asia-Pacific after the Fall of Saigon", The Historical Journal, 52(3): 697–716.
- Menétray-Monchau, Cécile (August 2005), "The Mayaguez Incident as an Epilogue to the Vietnam War and its Reflection on the Post-Vietnam Political Equilibrium in Southeast Asia", Cold War History, p. 346.
- "Election Is Crunch Time for U.S. Secret Service". National Geographic News. Retrieved March 2, 2008.
- Letter from Gerald Ford to Michael Treanor. Fordham University, September 21, 2005. Archived June 14, 2007, at the Wayback Machine. Retrieved March 2, 2008.
- Biographical Directory of Federal Judges, a public-domain publication of the Federal Judicial Center.
- Another Loss For the Gipper. Time, March 29, 1976. Retrieved December 31, 2006.
- VH1 News Presents: Politics: A Pop Culture History, PRNewswire, October 19, 2004. Retrieved December 31, 2006.
- Election of 1976: A Political Outsider Prevails. C-SPAN. Retrieved December 31, 2006.
- Shabecoff, Philip. "160,000 Mark Two 1775 Battles; Concord Protesters Jeer Ford – Reconciliation Plea", The New York Times, April 20, 1975, p. 1.
- Shabecoff, Philip. "Ford, on Bicentennial Trip, Bids U.S. Heed Old Values", The New York Times, April 19, 1975, p. 1.
- "Presidential Election 1976 States Carried". multied.com. Retrieved December 31, 2006.
- Perrone, Marguerite. "Eisenhower Fellowship: A History 1953–2003". 2003.
- Allen, Richard V. "How the Bush Dynasty Almost Wasn't", Hoover Institution, reprinted from The New York Times Magazine, July 30, 2000. Retrieved December 31, 2006.
- Price, Deb. "Gerald Ford: Treat gay couples equally". The Detroit News, October 29, 2001. Retrieved December 28, 2006.
- Stolberg, Sheryl Gay. "Vocal Gay Republicans Upsetting Conservatives", The New York Times, June 1, 2003, p. N26.
- Woodward, Bob. "Ford Disagreed With Bush About Invading Iraq". The Washington Post, December 28, 2006. Retrieved December 28, 2006.
- "Embargoed Interview Reveals Ford Opposed Iraq War". Democracy Now, December 28, 2006. Retrieved December 28, 2006.
- "Gerald Ford recovering after strokes". BBC, August 2, 2000. Retrieved December 31, 2006.
- "President Ford, 92, hospitalized with pneumonia". USA Today, Associated Press, January 17, 2006. Retrieved October 19, 2007.
- "Gerald Ford released from hospital". NBC News, Associated Press, July 26, 2006. Retrieved December 31, 2006.
- "Gerald Ford Dies At Age 93". CNN transcript, December 26, 2006. Retrieved March 2, 2008.
- DeFrank, Thomas. Write It When I'm Gone. G. Putnam & Sons, New York, 2007.
- Kornblut, Anne E. "Ford Arranged His Funeral to Reflect Himself and Drew in a Former Adversary", The New York Times, December 29, 2006.
- The Supreme Council, Ancient and Accepted Scottish Rite, Southern Jurisdiction, USA.

Further reading

- Cannon, James. Gerald R. Ford: An Honorable Life (Ann Arbor: University of Michigan Press, 2013), 482 pp. Official biography by a member of the Ford administration.
- Congressional Quarterly. President Ford: the man and his record (1974).
- Greene, John Robert. The Limits of Power: The Nixon and Ford Administrations. Indiana University Press, 1992. ISBN 978-0-253-32637-9.
- Greene, John Robert. The Presidency of Gerald R. Ford. University Press of Kansas, 1995. ISBN 978-0-7006-0639-9. The major scholarly study.
- Hersey, John Richard. The President: A Minute-By-Minute Account of a Week in the Life of Gerald Ford. New York: Alfred A. Knopf, 1975.
- Hult, Karen M. and Walcott, Charles E. Empowering the White House: Governance under Nixon, Ford, and Carter. University Press of Kansas, 2004.
- Jespersen, T. Christopher. "Kissinger, Ford, and Congress: the Very Bitter End in Vietnam". Pacific Historical Review 2002, 71#3: 439–473.
- Jespersen, T. Christopher. "The Bitter End and the Lost Chance in Vietnam: Congress, the Ford Administration, and the Battle over Vietnam, 1975–76". Diplomatic History 2000, 24#2: 265–293.
- Parmet, Herbert S. "Gerald R. Ford" in Henry F. Graff, ed., The Presidents: A Reference History (3rd ed. 2002). Short scholarly overview.
- Randolph, Sallie G. Gerald R. Ford, president (1987). For secondary schools.
- Schoenebaum, Eleanora. Political Profiles: The Nixon/Ford years (1979). Short biographies of over 500 political and national leaders.
- Smith, Richard Norton. An Ordinary Man: The Surprising Life and Historic Presidency of Gerald R. Ford (Harper, 2023).
- Williams, Daniel K. The Election of the Evangelical: Jimmy Carter, Gerald Ford, and the Presidential Contest of 1976 (University Press of Kansas, 2020).
3
Show HN: SWIR Platform for Rapid Microservice Creation
swir-rs/swir (GitHub repository)
2
Science Fiction: The futures it inspires us to create or to leave behind
Today, Stanisław Lem, who is regarded as one of the best science fiction writers of all time, would have turned 100. He sadly passed away 15 years ago, leaving behind a considerable collection of books and essays. His works are not only interesting in their own right; along with other science fiction pieces, they provide inspiration - both for how we could improve the world around us and for the futures we should avoid creating. Science fiction is a genre whose history spans centuries: A True Story by the 2nd-century Greek writer Lucian of Samosata is regarded by some as the first science fiction novel, or at least one of the precursors of the genre. It includes the now familiar themes of colonizing space, traveling to distant galaxies, and alien lifeforms. It shows that our longing to explore what the world could be, both positive and negative, has always interested our ever-inquisitive species. Rarely do we come up with ideas in a vacuum. Perhaps one of the best things about us as a species is that we are able to collaborate - and that doesn't necessarily mean working directly together. Sometimes it's a case of one person's work inspiring another, one scientific discovery leading to others, or a piece of literature providing the spark for a new invention. We associate the act of being inspired with the feeling that we can achieve something: it's about making something happen after seeing that it may be possible. Philip K. Dick once said: “I want to write about people I love, and put them into a fictional world spun out of my own mind, not the world we actually have, because the world we actually have does not meet my standards.” So some may use science fiction as a way of creating a new, better version of the world - utopias even, such as the beautiful, egalitarian planet Earth in Star Trek - boldly looking beyond our current capabilities. Imagining other ways of doing things is often the first step towards making that idea happen.
Jules Verne’s Twenty Thousand Leagues Under the Seas, for example, was purportedly the inspiration for American inventor Simon Lake’s open-water submarine; similarly, the young Igor Sikorsky, fascinated by the ideas in Verne’s novel The Clipper of the Clouds (a.k.a. Robur the Conqueror), went on to invent the helicopter. Finn Brunton, author of Digital Cash: The Unknown History of the Anarchists, Utopians, and Technologists Who Created Cryptocurrency, explains that what all the creators of the concept of digital cash had in common was a shared sense of “science-fictional sensibility” - a specific attitude which came from reading the American science fiction of the 1950s, 60s and 70s. And nowadays, in the age of Internet decentralization, sci-fi-related themes abound. Vernor Vinge’s True Names had a major influence on the cypherpunk movement. Cory Doctorow’s books inspired legions of hacktivists and tinkerers working to undermine Big Tech’s dominance over the Internet. In the Ethereum community you will often come across Daniel Suarez’s two-part novel Daemon, in which a computer program runs without human control. At the Golem Foundation, you can often sense the creative mark that science fiction has left: the name Golem itself is a nod to Golem XIV, an essay by Lem which takes the form of a lecture given by a supercomputer. Careful readers of the Wildland paper should also spot a number of references to another of Lem’s great works, Solaris. Julian, Golem Foundation’s CEO, regards Lem’s The Invincible and Peace on Earth - brimming with the themes of hives and miniaturization - as works which can help further decentralists’ visions. Alas, not every science fiction vision comes to fruition. Noah Smith writes that the reason we have yet to see the futures of Star Trek, The Jetsons or Asimov come to life is that we have reached the limits of both theoretical physics and energy.
So perhaps some things are simply too difficult to replicate, despite the massive technological leaps we have made and our increasing understanding of physics. Perhaps some are simply unattainable according to the known rules which govern our universe. And maybe we should be grateful that some of these science fiction futures have not materialized. Of course, while some may use science fiction as a way to imagine a new, better version of the world, a utopia even, others (or indeed sometimes the same) create worlds far from perfect, even verging on dystopian. These are often equally exciting and interesting to read, sometimes appearing to foreshadow where the path we’re taking will lead us (many cite Orwell’s 1984 as a vision of where socialism was meant to take us). They offer a set of completely novel challenges: robots going rogue, a completely new system of oppression being put in place, or humans becoming enslaved to technology. And in this sense, science fiction can also be an inspiration for what not to do. In his essay The Self-Preventing Prophecy, the American scientist and sci-fi author David Brin explains that being pessimistic about the future and offering dystopian visions can help prevent those very visions from becoming a reality. He writes: “I’m much more interested in exploring possibilities than likelihoods, because a great many more things might happen than actually do”. And as Brin puts it, it’s as much about avoiding mistakes as it is about planning for success. So science fiction is perhaps equally about showing us futures which we should avoid as it is about inspiring us to attain the futures it presents. Some, like Neal Stephenson, whose works like Snow Crash, Cryptonomicon, and The Diamond Age inspired thousands of people - us at the Golem Foundation included - to work on a more private, decentralized, and user-friendly Internet, think that nowadays science fiction writers are actually being too pessimistic.
He says that “sci-fi’s greatest contribution is showing how new technologies function in a web of social and economic systems—what authors call ‘worldbuilding.’” Michael Solana, author and vice president at the venture capital firm Founders Fund, shares this sentiment, and adds to it: dystopian visions make people fear technology, while technology is the biggest game-changer in terms of bettering our society. He writes: “We must stand up and defeat our fears. (…) It is in our power to create a brilliant world, but we must tell ourselves a story where our tools empower us to do it.” On the other hand, having so many pessimistic visions may be a good remedy for all the possibilities technology has created: the more technology makes possible in general, the more dangerous the potential futures it creates. So on this day, while remembering Lem and his work, let us celebrate all the minds behind science fiction and all the futures that they have imagined: those which we would love to make happen, but perhaps never will, those which have already made our world a better place, those which will do so if the right people are inspired, and the futures which we would be better off leaving behind in the realm of fiction.
4
An Interactive Guide to CSS Transitions
The world of web animations has become a sprawling jungle of tools and technologies. Libraries like GSAP and Framer Motion and React Spring have sprung up to help us add motion to the DOM. The most fundamental and critical piece, though, is the humble CSS transition. It's the first animation tool that most front-end devs learn, and it's a workhorse. Even the most grizzled, weathered animation veterans still reach for this tool often. There's a surprising amount of depth to this topic. In this tutorial, we'll dig in and learn a bit more about CSS transitions, and how we can use them to create lush, polished animations.

The fundamentals

The main ingredient we need to create an animation is some CSS that changes. Here's an example of a button that moves on hover, without animating:

```html
<button class="btn">Hello World</button>

<style>
  .btn {
    width: 100px;
    height: 100px;
    border-radius: 50%;
    border: none;
    background: slateblue;
    color: white;
    font-size: 20px;
    font-weight: 500;
    line-height: 1;
  }
  .btn:hover {
    transform: translateY(-10px);
  }
</style>
```

This snippet uses the :hover pseudoclass to specify an additional CSS declaration when the user's mouse rests atop our button, similar to an onMouseEnter event in JavaScript. To shift the element up, we use transform: translateY(-10px). While we could have used margin-top for this, transform: translate is a better tool for the job. We'll see why later. By default, changes in CSS happen instantaneously. In the blink of an eye, our button has teleported to a new position!
This is incongruous with the natural world, where things happen gradually. We can instruct the browser to interpolate from one state to another with the aptly-named transition property:

```css
.btn {
  /* All of the base styles from the previous snippet are unchanged. */
  transition: transform 250ms;
}
.btn:hover {
  transform: translateY(-10px);
}
```

transition can take a number of values, but only two are required:

- The name of the property we wish to animate
- The duration of the animation

If you plan on animating multiple properties, you can pass it a comma-separated list:

```css
.btn {
  transition: transform 250ms, opacity 400ms;
}
```

Timing functions

When we tell an element to transition from one position to another, the browser needs to work out what each "intermediary" frame should look like. For example: let's say that we're moving an element from left to right, over a 1-second duration. A smooth animation should run at 60fps, which means we'll need to come up with 60 individual positions between the start and end. Let's start by having them be evenly-spaced: To clarify what's going on here: each faded circle represents a moment in time. As the circle moves from left to right, these are the frames that were shown to the user. It's like a flipbook. In this animation, we're using a linear timing function. This means that the element moves at a constant pace; our circle moves by the same amount each frame. There are several timing functions available to us in CSS.
We can specify which one we want to use with the transition-timing-function property:

```css
.btn {
  transition: transform 250ms;
  transition-timing-function: linear;
}
```

Or, we can pass it directly to the transition shorthand property:

```css
.btn {
  transition: transform 250ms linear;
}
```

linear is rarely the best choice — after all, pretty much nothing in the real world moves this way. Good animations mimic the natural world, so we should pick something more organic! Let's run through our options.

ease-out

ease-out comes charging in like a wild bull, but it runs out of energy. By the end, it's pootering along like a sleepy turtle. Try scrubbing with the timeline; notice how drastic the movement is in the first few frames, and how subtle it becomes towards the end. If we were to graph the displacement of the element over time, it'd look something like this: When would you use ease-out? It's most commonly used when something is entering from off-screen (eg. a modal appearing). It produces the effect that something came hustling in from far away, and settles in front of the user.

ease-in

ease-in, unsurprisingly, is the opposite of ease-out. It starts slow and speeds up: As we saw, ease-out is useful for things that enter into view from offscreen. ease-in, naturally, is useful for the opposite: moving something beyond the bounds of the viewport. This combo is useful when something is entering and exiting the viewport, like a modal. We'll look at how to mix and match timing functions shortly. Note that ease-in is pretty much exclusively useful for animations that end with the element offscreen or invisible; otherwise, the sudden stop can be jarring.

ease-in-out

Next up, ease-in-out. It's the combination of the previous two timing functions: This timing function is symmetrical. It has an equal amount of acceleration and deceleration. I find this curve most useful for anything that happens in a loop (eg. an element fading in and out, over and over).
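As a quick sketch of that looping use case, a pulsing fade might pair ease-in-out with a CSS animation; the .pulse class and the keyframes below are illustrative, not from the article:

```css
/* A looping fade: ease-in-out smooths both the fade-out and the fade-in. */
.pulse {
  animation: pulse 2000ms ease-in-out infinite alternate;
}

@keyframes pulse {
  from {
    opacity: 0.4;
  }
  to {
    opacity: 1;
  }
}
```

The `alternate` direction makes each reversal start from a standstill, which is exactly the kind of motion ease-in-out is shaped for.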
It's a big step-up over linear, but before you go slapping it on everything, let's look at one more option.

ease

If I had a bone to pick with the CSS language authors when it comes to transitions, it's that ease is poorly named. It isn't descriptive at all; literally all timing functions are eases of one sort or another! That nitpick aside, ease is awesome. Unlike ease-in-out, it isn't symmetrical; it features a brief ramp-up, and a lot of deceleration. ease is the default value — if you don't specify a timing function, ease gets used. Honestly, this feels right to me. ease is a great option in most cases. If an element moves, and isn't entering or exiting the viewport, ease is usually a good choice.

Custom curves

If the provided built-in options don't suit your needs, you can define your own custom easing curve, using the cubic bézier timing function!

```css
.btn {
  /* The four numbers are the coordinates of the two control points.
     These particular values reproduce the built-in "ease" preset. */
  transition-timing-function: cubic-bezier(0.25, 0.1, 0.25, 1);
}
```

All of the values we've seen so far are really just presets for this cubic-bezier function. It takes 4 numbers, representing 2 control points. Bézier curves are really nifty, but they're beyond the scope of this tutorial. I'll be writing more about them soon though! In the meantime, you can start creating your own Bézier timing functions using this wonderful helper from Lea Verou: Once you come up with an animation curve you're satisfied with, click “Copy” at the top and paste it into your CSS! You can also pick from this extended set of timing functions. Though beware: a few of the more outlandish options won't work in CSS. When starting out with custom Bézier curves, it can be hard to come up with a curve that feels natural. With some practice, however, this is an incredibly expressive tool.

Animation performance

Earlier, we mentioned that animations ought to run at 60fps. When we do the math, though, we realize that this means the browser only has 16.6 milliseconds to paint each frame.
That's really not much time at all; for reference, it takes us about 100ms-300ms to blink! If our animation is too computationally expensive, it'll appear janky and stuttery. Frames will get dropped, as the device can't keep up. Experience this for yourself by tweaking the new "Frames per second" control: In practice, poor performance will often take the form of variable framerates, so this isn't a perfect simulation. Animation performance is a surprisingly deep and interesting area, well beyond the scope of this introductory tutorial. But let's cover the absolutely-critical, need-to-know bits:

- Some CSS properties are wayyy more expensive to animate than others. For example, height is a very expensive property because it affects layout. When an element's height shrinks, it causes a chain reaction; all of its siblings will also need to move up, to fill the space!
- Other properties, like background-color, are somewhat expensive to animate. They don't affect layout, but they do require a fresh coat of paint on every frame, which isn't cheap.
- Two properties — transform and opacity — are very cheap to animate. If an animation currently tweaks a property like width or left, it can be greatly improved by moving it to transform (though it isn't always possible to achieve the exact same effect).

Be sure to test your animations on the lowest-end device that your site/app targets. Your development machine is likely many times faster than it. If you're interested in learning more about animation performance, I gave a talk on this subject at React Rally. It goes deep into this topic:

Hardware Acceleration

Depending on your browser and OS, you may have noticed a curious little imperfection in some of the earlier examples: Pay close attention to the letters. Notice how they appear to glitch slightly at the start and end of the transition, as if everything was locking into place? This happens because of a hand-off between the computer's CPU and GPU. Let me explain.
When we animate an element using transform and opacity, the browser will sometimes try to optimize this animation. Instead of rasterizing the pixels on every frame, it transfers everything to the GPU as a texture. GPUs are very good at doing these kinds of texture-based transformations, and as a result, we get a very slick, very performant animation. This is known as “hardware acceleration”. Here's the problem: GPUs and CPUs render things slightly differently. When the CPU hands it to the GPU, and vice versa, you get a snap of things shifting slightly. We can fix this problem by adding the following CSS property:

```css
.btn {
  will-change: transform;
}
```

will-change is a property that allows us to hint to the browser that we're going to animate the selected element, and that it should optimize for this case. In practice, what this means is that the browser will let the GPU handle this element all the time. No more handing-off between CPU and GPU, no more telltale “snapping into place”. will-change lets us be intentional about which elements should be hardware-accelerated. Browsers have their own inscrutable logic around this stuff, and I'd rather not leave it up to chance. There's another benefit to hardware acceleration: we can take advantage of sub-pixel rendering. Check out these two boxes. They shift down when you hover/focus them. One of them is hardware-accelerated, and the other one isn't.
```html
<style>
  .accelerated.box {
    transition: transform 750ms;
    will-change: transform;
    background: slateblue;
  }
  .accelerated.box:hover,
  .accelerated.box:focus {
    transform: translateY(10px);
  }

  .janky.box {
    transition: margin-top 750ms;
    will-change: margin-top;
    background: deeppink;
  }
  .janky.box:hover,
  .janky.box:focus {
    margin-top: 10px;
  }
</style>

<div class="wrapper">
  <button class="accelerated box"></button>
  <button class="janky box"></button>
</div>
```

It's maybe a bit subtle, depending on your device and your display, but one box moves much more smoothly than the other. Properties like margin-top can't sub-pixel-render, which means they need to round to the nearest pixel, creating a stepped, janky effect. transform, meanwhile, can smoothly shift between pixels, thanks to the GPU's anti-aliasing trickery.

UX touches

Action-driven motion

Let's take another look at our rising “Hello World” button. As it stands, we have a "symmetrical" transition — the enter animation is the same as the exit animation:

- When the mouse hovers over the element, it shifts up by 10 pixels over 250ms
- When the mouse moves away, the element shifts down by 10 pixels over 250ms

A cute little detail is to give each action its own transition settings.
For hover animations, I like to make the enter animation quick and snappy, while the exit animation can be a bit more relaxed and lethargic:

```html
<button class="btn">Hello World</button>

<style>
  /* The base styles for .btn (size, colors, typography) are unchanged. */
  .btn {
    will-change: transform;
    transition: transform 450ms;
  }
  .btn:hover {
    transition: transform 125ms;
    transform: translateY(-10px);
  }
</style>
```

Another common example is modals. It can be useful for modals to enter with an ease-out animation, and to exit with a quicker ease-in animation: This is a small detail, but it speaks to a much larger idea. I believe most developers think in terms of states: for example, you might look at this situation and say that we have a “hover” state and a default state. Instead, what if we thought in terms of actions? We animate based on what the user is doing, thinking in terms of events, not states. We have a mouse-enter animation and a mouse-leave animation. Tobias Ahlin shows how this idea can create next-level semantically-meaningful animations in his blog post, Meaningful Motion with Action-Driven Animation.

Delays

Well, we've come pretty far in our quest to become proficient with CSS transitions, but there are a couple final details to go over. Let's talk about transition delays.
I believe that just about everyone has had this frustrating experience before: Image courtesy of Ben Kamens As a developer, you can probably work out why this happens: the dropdown only stays open while being hovered! As we move the mouse diagonally to select a child, our cursor dips out-of-bounds, and the menu closes. This problem can be solved in a rather elegant way without needing to reach for JS. We can use transition-delay! css transition-delay allows us to keep things status-quo for a brief interval. In this case, when the user moves their mouse outside .dropdown-wrapper, nothing happens for 300ms. If their mouse re-enters the element within that 300ms window, the transition never takes place. After 300ms elapses, the transition kicks in normally, and the dropdown fades out over 400ms. Link to this heading Doom flicker When an element is moved up or down on hover, we need to be very careful we don't accidentally introduce a "doom flicker": Warning: This GIF includes flickering motion that may potentially trigger seizures for people with photosensitive epilepsy. Reveal You may have noticed a similar effect on some of the demos on this page! The trouble occurs when the mouse is near the element's boundary. The hover effect takes the element out from under the mouse, which causes it to fall back down under the mouse, which causes the hover effect to trigger again… many times a second. How do we solve for this? The trick is to separate the trigger from the effect. Here's a quick example: Code Playground Format code using Prettier Reset code HTML CSS <button class="btn"> <span class="background"> Hello World </span> </button> <style> .background { will-change: transform; transition: transform 450ms; } .btn:hover .background { transition: transform 150ms; transform: translateY(-10px); } /* Toggle me on for a clue! 
*/
  .btn {
    /* outline: auto; */
  }
</style>

Our <button> now has a new child, .background. This span houses all of the cosmetic styles (background color, font stuff, etc.). When we mouse over the plain-jane button, it causes the child to peek out above. The button, however, is stationary. Try uncommenting the outline to see exactly what's going on!

Respecting motion preferences

When I see a well-crafted animation on the web, I react with delight and glee. People are different, though, and some folks have a very different reaction: nausea and malaise. I've written before about respecting "prefers-reduced-motion", an OS-level setting users can toggle to express a preference for less motion. Let's apply those lessons here, by disabling animations for folks who request it:

@media (prefers-reduced-motion: reduce) {
  .btn {
    transition: none;
  }
}

This small tweak means that animations will resolve immediately for users who have gone into their system preferences and toggled a checkbox. As front-end developers, we have a certain responsibility to ensure that our products aren't causing harm. This is a quick step we can perform to make our sites/apps friendlier and safer.

The bigger picture

CSS transitions are fundamental, but that doesn't mean they're easy. There's a surprising amount of depth to them; even in this long-winded blog post, I've had to cut some stuff out to keep it manageable! Web animations are more important than most developers realize. A single transition here or there won't make or break an experience, but it adds up. In aggregate, well-executed animations can have a surprisingly profound effect on the overall user experience. Transitions can make an app feel "real".
They can offer feedback, and communicate in a more-visceral way than copy alone. They can teach people how to use your products. They can spark joy. So, I have a confession to make: this tutorial was plucked straight from my CSS course, CSS for JavaScript Developers. If you found this tutorial helpful, you should know that this is only the tip of the iceberg! My course is designed to give you confidence with CSS. We explore how the language really works, building up a mental model you can use to solve any layout/UI challenge. It's not like any other course you've taken. It's built on the same tech stack as this blog, and so there's lots of rich interactive content, but there are also bite-sized videos, tons of exercises, and real-world-inspired projects where you can test your knowledge. There are even some mini-games! Learn more and see if you'd benefit from it at https://css-for-js.dev. Finally, no interactive lesson is complete without a Sandbox Mode! Play with all the previous settings (and a couple new ones!) and create some generative art with this open-ended widget. Last Updated November 20th, 2022
8
Cadgol – a cad-native modeling language
1
Toniebox – HackieboxNG – A custom bootloader with in memory patching
toniebox-reverse-engineering/hackiebox_cfw_ng
1
Shall we play a game? How video games transformed AI
The Economist
1
Apple Targets Autonomous Car for Consumers by 2024
1
Contents of GPT-3 and GPT-Neo [pdf]
1
ML Frameworks and Extensions for Scikit-Learn
Plenty of packages implement the Scikit-learn estimator API. If you're already familiar with Scikit-learn, you'll find the integration of these libraries pretty straightforward. With these packages, we can extend the functionality of Scikit-learn estimators, and I'll show you how to use some of them in this article. In this section, we'll explore libraries that can be used to process and transform data. The first, sklearn-pandas, can be used to map DataFrame columns to Scikit-learn transformations. Then you can combine these columns into features. To start using the package, install 'sklearn-pandas' via pip. The 'DataFrameMapper' can be used to map pandas DataFrame columns into Scikit-learn transformations. Let's see how it's done. First, create a dummy DataFrame:

import pandas as pd

data = pd.DataFrame({
    'Name': ['Ken', 'Jeff', 'John', 'Mike', 'Andrew', 'Ann', 'Sylvia', 'Dorothy', 'Emily', 'Loyford'],
    'Age': [31, 52, 56, 12, 45, 50, 78, 85, 46, 135],
    'Phone': [52, 79, 80, 75, 43, 125, 74, 44, 85, 45],
    'Uni': ['One', 'Two', 'Three', 'One', 'Two', 'Three', 'One', 'Two', 'Three', 'One']
})

The 'DataFrameMapper' accepts a list of tuples – the first item of each tuple is a column name in the Pandas DataFrame. The second item is the kind of transformation that will be applied to the column. For example, 'LabelBinarizer' can be applied to the 'Uni' column, whereas the 'Age' column is scaled using a 'StandardScaler'.

import sklearn.preprocessing
from sklearn_pandas import DataFrameMapper

mapper = DataFrameMapper([
    ('Uni', sklearn.preprocessing.LabelBinarizer()),
    (['Age'], sklearn.preprocessing.StandardScaler())
])

After defining the mapper, next we use it to fit and transform the data. The 'transformed_names_' attribute of the mapper can be used to show the resulting names after the transformation. Passing 'df_out=True' to the mapper will return your results as a Pandas DataFrame.
mapper = DataFrameMapper([
    ('Uni', sklearn.preprocessing.LabelBinarizer()),
    (['Age'], sklearn.preprocessing.StandardScaler())
], df_out=True)

The next package combines n-dimensional labeled arrays from xarray with Scikit-learn tools. You can apply Scikit-learn estimators to xarrays without losing their labels. Sklearn-xarray is basically a bridge between xarray and Scikit-learn. In order to use its functionalities, install 'sklearn-xarray' via pip or 'conda'. The package has wrappers which let you use sklearn estimators on xarray DataArrays and Datasets. To illustrate this, let's first create a 'DataArray'.

import numpy as np
import xarray as xr

data = np.random.rand(16, 4)
my_xarray = xr.DataArray(data)

Select one transformation from Sklearn to apply to this 'DataArray'. In this case, let's apply the 'StandardScaler'.

from sklearn.preprocessing import StandardScaler
from sklearn_xarray import wrap

Xt = wrap(StandardScaler()).fit_transform(my_xarray)

Wrapped estimators can be used in Sklearn pipelines seamlessly.

pipeline = Pipeline([
    ('pca', wrap(PCA(n_components=50), reshapes='feature')),
    ('cls', wrap(LogisticRegression(), reshapes='feature'))
])

When fitting this pipeline, you just pass in the DataArray. Similarly, DataArrays can be used in a cross-validated grid search. For this, you need to create a 'CrossValidatorWrapper' instance from 'sklearn-xarray'.

from sklearn_xarray.model_selection import CrossValidatorWrapper
from sklearn.model_selection import GridSearchCV, KFold

cv = CrossValidatorWrapper(KFold())
pipeline = Pipeline([
    ('pca', wrap(PCA(), reshapes='feature')),
    ('cls', wrap(LogisticRegression(), reshapes='feature'))
])
gridsearch = GridSearchCV(
    pipeline, cv=cv, param_grid={'pca__n_components': [20, 40, 60]}
)

After that, you will fit the 'gridsearch' to X and y in the 'DataArray' data type. Are there tools and libraries that integrate Sklearn for better AutoML? Yes there are, and here are some examples.
With auto-sklearn, you can perform automated machine learning with Scikit-learn. For the setup, you need to install some dependencies manually.

$ curl https://raw.githubusercontent.com/automl/auto-sklearn/master/requirements.txt | xargs -n 1 -L 1 pip install

Next, install 'auto-sklearn' via pip. When using this tool, you don't need to worry about algorithm selection and hyper-parameter tuning. Auto-sklearn does all that for you, thanks to the latest advances in Bayesian optimization, meta-learning, and ensemble construction. To use it, you need to select a classifier or regressor, and fit it to the training set.

from autosklearn.classification import AutoSklearnClassifier

cls = AutoSklearnClassifier()
cls.fit(X_train, y_train)
predictions = cls.predict(X_test)

Given a certain dataset, Auto_ViML tries out different models with varying features. It eventually settles on the best-performing model. The package also selects the smallest number of features possible for building the model. This gives you a less complex, more interpretable model. To see it in action, install 'autoviml' via pip.

from autoviml.Auto_ViML import Auto_ViML
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=54)
train, test = X_train.join(y_train), X_val.join(y_val)
model, features, train, test = Auto_ViML(train, "target", test, verbose=2)

TPOT is a Python-based AutoML tool. It uses genetic programming to optimize machine learning pipelines. It explores multiple pipelines in order to settle on the best one for your dataset. Install 'tpot' via pip to start tinkering with it. After running 'tpot', you can save the resulting pipeline in a file. The file will be exported once the exploration process is completed or when you terminate the process.
The snippet below shows how you can create a classification pipeline on the digits dataset.

from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, train_size=0.75, test_size=0.25, random_state=42)
tpot = TPOTClassifier(generations=5, population_size=50, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_digits_pipeline.py')

Featuretools is a tool for automated feature engineering. It works by transforming temporal and relational datasets into feature matrices. Install 'featuretools[complete]' via pip to start using it. Deep Feature Synthesis (DFS) can be used for automated feature engineering. First, you define a dictionary containing all entities in a dataset. In 'featuretools', an entity is a single table. After that, the relationships between the different entities are defined. The next step is to pass the entities, the list of relationships, and the target entity to DFS. This will get you the feature matrix and the corresponding list of feature definitions.

import featuretools as ft

entities = {
    "customers": (customers_df, "customer_id"),
    "sessions": (sessions_df, "session_id", "session_start"),
    "transactions": (transactions_df, "transaction_id", "transaction_time")
}
relationships = [
    ("sessions", "session_id", "transactions", "session_id"),
    ("customers", "customer_id", "sessions", "customer_id")
]
feature_matrix, features_defs = ft.dfs(entities=entities, relationships=relationships, target_entity="customers")

You can use Neuraxle for hyperparameter tuning and AutoML. Install 'neuraxle' via pip to start using it. Apart from Scikit-learn, Neuraxle is also compatible with Keras, TensorFlow, and PyTorch. Check out a complete example here.
Now it's time for a couple of SciKit tools that you can use for machine learning experimentation. SciKit-Learn Laboratory is a command-line tool you can use to run machine learning experiments. To start using it, install 'skll' via pip. After that, you need to obtain a dataset in the 'SKLL' format. Next, create a configuration file for the experiment, and run the experiment in the terminal. When the experiment is complete, multiple files will be stored in the results folder. You can use these files to examine the experiment. Neptune's integration with Scikit-learn lets you log your experiments using Neptune. For instance, you can log the summary of your Scikit-learn regressor.

from neptunecontrib.monitoring.sklearn import log_regressor_summary

log_regressor_summary(rfr, X_train, X_test, y_train, y_test)

Check out this notebook for the complete example. Let's now shift gears, and look at SciKit libraries that are focused on model selection and optimization. Scikit-optimize implements methods for sequential model-based optimization. Install 'scikit-optimize' via pip to start using these functions. Scikit-optimize can be used to perform hyper-parameter tuning via Bayesian optimization, based on Bayes' theorem. You use 'BayesSearchCV' to obtain the best parameters. A Scikit-learn model is passed to it as the first argument.

from sklearn.ensemble import GradientBoostingRegressor
from skopt.space import Real, Categorical, Integer
from skopt import BayesSearchCV

regressor = BayesSearchCV(
    GradientBoostingRegressor(),
    {
        'learning_rate': Real(0.1, 0.3),
        'loss': Categorical(['lad', 'ls', 'huber', 'quantile']),
        'max_depth': Integer(3, 6),
    },
    n_iter=32, random_state=0, verbose=1, cv=5, n_jobs=-1,
)
regressor.fit(X_train, y_train)

After fitting, you can get the best parameters of the model via the 'best_params_' attribute.
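'BayesSearchCV' deliberately follows the search-estimator pattern scikit-learn itself uses. If you don't have skopt installed, the same fit-then-read-'best_params_' flow can be sketched with scikit-learn's built-in GridSearchCV (the dataset and grid here are chosen purely for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Estimator first, then the search space -- the same shape BayesSearchCV takes.
search = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.01, 1.0, 100.0]}, cv=3)
search.fit(X, y)

print(search.best_params_)  # the winning hyper-parameter setting
```

Whichever search class you use, the fitted object exposes the same attributes ('best_params_', 'best_score_', 'cv_results_'), so swapping grid search for Bayesian search is a one-line change.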
Sklearn-deap is a package used to implement evolutionary algorithms. It reduces the time you need to find the best parameters for the model. It doesn't try out every possible combination, but only evolves the combinations that result in the best performance. Install 'sklearn-deap' via pip.

from evolutionary_search import EvolutionaryAlgorithmSearchCV

cv = EvolutionaryAlgorithmSearchCV(estimator=SVC(),
                                   params=paramgrid,
                                   scoring="accuracy",
                                   cv=StratifiedKFold(n_splits=4),
                                   verbose=1,
                                   population_size=50,
                                   gene_mutation_prob=0.10,
                                   gene_crossover_prob=0.5,
                                   tournament_size=3,
                                   generations_number=5,
                                   n_jobs=4)
cv.fit(X, y)

Moving on, let's now look at Scikit tools that you can use to export your models for production. sklearn-onnx enables the conversion of Sklearn models to ONNX. To use it, you need to get 'skl2onnx' via pip. Once your pipeline is ready, you can use the 'to_onnx' function to convert the model to ONNX.

import numpy
from skl2onnx import to_onnx

onx = to_onnx(pipeline, X_train[:1].astype(numpy.float32))

Treelite is a model compiler for decision tree ensembles. It handles various tree-based models, such as random forests and gradient-boosted trees. You can use it to import Scikit-learn models. Here, 'model' is a scikit-learn model object.

import treelite.sklearn

model = treelite.sklearn.import_model(model)

In this section, let's look at libraries that can be used for model visualization and inspection. dtreeviz is used for decision tree visualization and model interpretation.

from dtreeviz.trees import dtreeviz

viz = dtreeviz(
    model,
    X_train,
    y_train,
    feature_names=boston.feature_names,
    fontname="Arial",
    title_fontsize=16,
    colors={"title": "red"}
)

eli5 is a package that can be used for debugging and inspecting machine learning classifiers. You can also use it to explain their predictions.
For example, an explanation of Scikit-learn estimator weights can be shown with eli5.show_weights(model). dabl provides boilerplate code for common machine learning tasks. It's still in active development, so it's not recommended for production systems.

import dabl
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
sc = dabl.SimpleClassifier().fit(X_train, y_train)
print("Accuracy score", sc.score(X_test, y_test))

Skorch is a Scikit-learn wrapper for PyTorch. It lets you use PyTorch in Scikit-learn. It supports numerous data types, like PyTorch Tensors, NumPy arrays, and Python dictionaries.

from skorch import NeuralNetClassifier

net = NeuralNetClassifier(
    MyModule,
    max_epochs=10,
    lr=0.1,
    iterator_train__shuffle=True,
)
net.fit(X, y)

In this article, we explored some of the popular tools and libraries that extend the Scikit-learn ecosystem. As you have seen, these tools can be used to process and transform data, automate model selection and tuning, track experiments, export models for production, and visualize and inspect them. Try out these packages in your Scikit-learn workflow, and you might be surprised how convenient they are.
198
Loading CSV File at the Speed Limit of the NVMe Storage
I plan to write a series of articles discussing some simple but not embarrassingly parallel algorithms. These have practical uses and will most likely target many-core CPUs or CUDA GPUs. Today's article is the first, discussing a parallel implementation of a CSV file parser. In the old days, when our spinning-disk speed maxed out at 100MiB/s, we had only two choices: either we didn't care about file loading time at all, treating it as a cost of life, or we used a file format entangled with the underlying memory representation to squeeze out the last bit of data-loading performance. That world has long gone. My current workstation uses a software RAID0 (mdadm) over two 1TB Samsung 970 EVO NVMe drives for data storage. This setup usually gives me around 2GiB/s read/write speed (you can read more about my workstation here). The CSV file format is firmly in the former of the two categories. The thing that people who exchange CSV files care about most, above anything else, is interoperability. Serious people who actually care about speed and efficiency have moved to other formats such as Apache Parquet or Apache Arrow. But CSV files live on. It is still by far the most common format in Kaggle contests. There exist many implementations of CSV file parsers. Among them, csv2 and Vince's CSV Parser are two common ones. That doesn't count standard implementations such as the one from Python. Most of these implementations shy away from utilizing many cores. That is a reasonable choice: in many likely scenarios, you load many small-ish CSV files, and these loads can be parallelized at task level. That was an OK choice until recently, when I had to deal with CSV files of many GiBs. These files can take many seconds to load, even from tmpfs. That indicates a performance bottleneck at CPU parsing time. The most obvious way to overcome this bottleneck is to fully utilize the 32 cores of the Threadripper 3970x.
This can be embarrassingly simple if we can reliably break down the parsing by rows. Unfortunately, RFC 4180 prevents us from simply using line breaks as row delimiters. In particular, when quoted, a cell's content can contain line breaks, and these must not be recognized as row delimiters. Paratext first implemented a two-pass approach for parallel CSV parsing. It was later documented in Speculative Distributed CSV Data Parsing for Big Data Analytics. Besides the two-pass approach, the paper discussed a more sophisticated speculative approach that is suitable for higher-latency distributed environments. In the past few days, I implemented a variant of the two-pass approach that can max out the NVMe storage bandwidth. It was an interesting journey, as I hadn't written any serious parser in C for a very long time. A CSV file represents simple tabular data with rows and columns. Thus, to parse a CSV file is to divide a text file into cells that can be uniquely identified by row and column index. In C++, this can be done in zero-copy fashion with string_view. In C, every string has to be null-terminated, so you need to either manipulate the original buffer or copy it over. I elected the latter. To simplify the parser implementation, it is assumed we are given a block of memory that is the content of the CSV file. This can be done in C with:

FILE *file = fopen("file path", "r");
const int fd = fileno(file);
fseek(file, 0, SEEK_END);
const size_t file_size = ftell(file);
fseek(file, 0, SEEK_SET);
void *const data = mmap((caddr_t)0, file_size, PROT_READ, MAP_SHARED, fd, 0);

We are going to use OpenMP's parallel for-loop to implement the core algorithm. Nowadays, Clang has pretty comprehensive support for OpenMP; nevertheless, we will only use the very trivial part of what OpenMP provides. To parallel-parse a CSV file, we first need to break it down into chunks.
We can divide the file into 1MiB sequences of bytes as our chunks. Within each chunk, we can start to find the right line breaks. The double-quote in RFC 4180 can quote a line break, which makes finding the right line breaks harder. But at the same time, the RFC defines the way to escape a double-quote: two double-quotes back-to-back. With this, if we count double-quotes from the beginning of the file, we know that a line break is within a quoted cell if we have encountered an odd number of double-quotes so far. If we encounter an even number of double-quotes before a line break, we know that it is the beginning of a new row. We can count double-quotes from the beginning of each chunk. However, because we don't know whether there is an odd or even number of double-quotes before this chunk, we cannot differentiate whether a line break is the starting point of a new row, or just within a quoted cell. What we do know, though, is that all line breaks after an odd number of double-quotes within a chunk belong to the same class; we simply don't know yet which class that is. So we count the two classes separately. A code excerpt would look like this:

#define CSV_QUOTE_BR(c, n) \
  do { \
    if (c##n == quote) \
      ++quotes; \
    else if (c##n == '\n') { \
      ++count[quotes & 1]; \
      if (starter[quotes & 1] == -1) \
        starter[quotes & 1] = (int)(p - p_start) + n; \
    } \
  } while (0)

parallel_for(i, aligned_chunks) {
  const uint64_t *pd = (const uint64_t *)(data + i * chunk_size);
  const char *const p_start = (const char *)pd;
  const uint64_t *const pd_end = pd + chunk_size / sizeof(uint64_t);
  int quotes = 0;
  int starter[2] = {-1, -1};
  int count[2] = {0, 0};
  for (; pd < pd_end; pd++) {
    // Load 8 bytes per batch.
    const char *const p = (const char *)pd;
    char c0, c1, c2, c3, c4, c5, c6, c7;
    c0 = p[0], c1 = p[1], c2 = p[2], c3 = p[3], c4 = p[4], c5 = p[5], c6 = p[6], c7 = p[7];
    CSV_QUOTE_BR(c, 0);
    CSV_QUOTE_BR(c, 1);
    CSV_QUOTE_BR(c, 2);
    CSV_QUOTE_BR(c, 3);
    CSV_QUOTE_BR(c, 4);
    CSV_QUOTE_BR(c, 5);
    CSV_QUOTE_BR(c, 6);
    CSV_QUOTE_BR(c, 7);
  }
  crlf[i].even = count[0];
  crlf[i].odd = count[1];
  crlf[i].even_starter = starter[0];
  crlf[i].odd_starter = starter[1];
  crlf[i].quotes = quotes;
} parallel_endfor

This is our first pass. After the first pass, we can sequentially go through each chunk's statistics to calculate how many rows and columns are in the given CSV file. The line breaks in the first chunk after an even number of double-quotes give the number of rows in the first chunk. Because we know the number of double-quotes in the first chunk, we now know which class of line breaks in the second chunk are the starting points of a row. The sum of these line breaks across chunks is the number of rows. For the number of columns, we can go through the first row and count the number of column delimiters outside of double-quotes. The second pass copies the chunks over, null-terminates each cell, and unescapes doubled double-quotes where applicable. We can piggyback our logic on top of the chunks allocated for the first pass. However, unlike the first pass, the parsing logic doesn't start at the very beginning of each chunk. It starts at the first starting point of a row in that chunk and ends at the first starting point of a row in the next chunk. The second pass turns out to occupy most of our parsing time, simply because it does most of the string manipulation and copying. Both passes are unrolled into 8-byte batch parsing, rather than per-byte parsing.
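To make the two-pass bookkeeping concrete, here is a small Python model of the same idea (scalar and illustrative; the shipped implementation is the unrolled C code above). Each chunk counts its line breaks in two classes by local quote parity; a sequential scan then resolves which class terminates rows:

```python
def chunk_stats(chunk: bytes, quote: int = ord('"')) -> dict:
    """First pass over one chunk: count line breaks seen after an even vs.
    odd number of double-quotes *within this chunk*, plus the quote total."""
    quotes = 0
    count = [0, 0]
    for c in chunk:
        if c == quote:
            quotes += 1
        elif c == ord("\n"):
            count[quotes & 1] += 1
    return {"even": count[0], "odd": count[1], "quotes": quotes}


def total_rows(chunks) -> int:
    """Sequential resolution: the running quote parity before each chunk
    tells us which class of its line breaks actually ends a row."""
    rows = 0
    parity = 0
    for stats in map(chunk_stats, chunks):
        rows += stats["odd"] if parity else stats["even"]
        parity ^= stats["quotes"] & 1
    return rows


# Two logical rows; the first contains a quoted line break.
data = b'a,"b\nc"\nd,e\n'
chunks = [data[i:i + 5] for i in range(0, len(data), 5)]
print(total_rows(chunks))  # 2
```

Note that the answer is independent of how the bytes are split into chunks, which is exactly what makes the first pass embarrassingly parallel.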
For the second pass, we did some bit-twiddling to quickly check whether there are delimiters, double-quotes, or line breaks that need to be processed, or whether we can simply copy the bytes over.

const uint64_t delim_mask = (uint64_t)0x0101010101010101 * (uint64_t)delim;
const uint64_t delim_v = v ^ delim_mask;
if ((delim_v - (uint64_t)0x0101010101010101) & ((~delim_v) & (uint64_t)0x8080808080808080)) {
  // Has delimiters.
}

You can find more discussions about this kind of bit-twiddling logic here. The complete implementation is available at ccv_cnnp_dataframe_csv.c. The implementation was compared against csv2, Vince's CSV Parser, and Paratext. The workstation uses an AMD Threadripper 3970x with 128GiB of memory running at 2666MHz. It has two Samsung 1TB 970 EVOs in mdadm-based RAID0. For csv2, I compiled csv2/benchmark/main.cpp with:

g++ -I../include -O3 -std=c++11 -o main main.cpp

For Vince's CSV Parser, I compiled csv-parser/programs/csv_bench.cpp with:

g++ -I../single_include -O3 -std=c++17 -o csv_bench csv_bench.cpp -lpthread

Paratext hasn't been actively developed for the past 2 years. I built it after patching paratext/python/paratext/core.py to remove the splitunc method. The simple benchmark Python script looks like this:

import paratext
import sys
dict_frame = paratext.load_raw_csv(sys.argv[1], allow_quoted_newlines=True)

I chose the DOHUI NOH dataset, which contains a 16GiB CSV file with 496,782 rows and 3213 columns. First, to test the raw performance, I moved the downloaded file to /tmp, which is mounted as in-memory tmpfs. The above performance accounts for the best you can get if file IO is not a concern. With the said 970 EVO RAID0, we can run another round of benchmarks against real disk IO. Note that for this round, we need to drop the system file cache with sudo bash -c "echo 3 > /proc/sys/vm/drop_caches" before each run. The performance of our parser approaches 2000MiB/s, not bad!
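The delimiter check above is the classic "does this word contain byte X" trick: XORing with a broadcast of the target byte turns matching bytes into zero, and the subtract-and-mask expression is nonzero exactly when a zero byte exists. A Python model of the same expression, for illustration only (the real code stays in the unrolled C loop):

```python
def word_has_byte(word: int, target: int) -> bool:
    """SWAR check: does the 64-bit word contain a byte equal to `target`?

    XOR with a broadcast of `target` turns matching bytes into 0x00;
    (v - 0x01...01) & ~v & 0x80...80 is nonzero iff v has a zero byte.
    """
    ones = 0x0101010101010101
    highs = 0x8080808080808080
    v = (word ^ (ones * target)) & 0xFFFFFFFFFFFFFFFF
    return bool(((v - ones) & ~v & highs) & 0xFFFFFFFFFFFFFFFF)


word = int.from_bytes(b"abc,defg", "little")
print(word_has_byte(word, ord(",")))  # True
print(word_has_byte(word, ord(";")))  # False
```

When the check comes back false for delimiter, quote, and line break, the 8-byte word can be copied verbatim, which is what keeps the second pass fast on mostly-plain data.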
csv2 is single-threaded. With only one thread, would our implementation still be reasonable? I moved parallel_for back to a serial for-loop and ran the experiment on tmpfs again. It is about 2x slower than csv2. This is expected, because we need to null-terminate strings and copy them to a new buffer. You cannot really ship a parser in C without doing fuzzing. Luckily, in the past few years it has become incredibly easy to write a fuzz program in C. This time, I chose LLVM's libFuzzer and turned on AddressSanitizer along the way. The fuzz program is very concise:

#include <stdint.h>
#include <stddef.h>
#include <string.h>
/* ...plus the ccv project headers, elided in the extracted text */

int LLVMFuzzerInitialize(int *argc, char ***argv)
{
  ccv_nnc_init();
  return 0;
}

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
  if (size == 0)
    return 0;
  int column_size = 0;
  ccv_cnnp_dataframe_t *dataframe = ccv_cnnp_dataframe_from_csv_new((void *)data, CCV_CNNP_DATAFRAME_CSV_MEMORY, size, ',', '"', 0, &column_size);
  if (dataframe)
  {
    if (column_size > 0) // Iterate through the first column.
    {
      ccv_cnnp_dataframe_iter_t *const iter = ccv_cnnp_dataframe_iter_new(dataframe, COLUMN_ID_LIST(0));
      const int row_count = ccv_cnnp_dataframe_row_count(dataframe);
      int i;
      size_t total = 0;
      for (i = 0; i < row_count; i++)
      {
        void *data = 0;
        ccv_cnnp_dataframe_iter_next(iter, &data, 1, 0);
        total += strlen(data);
      }
      ccv_cnnp_dataframe_iter_free(iter);
    }
    ccv_cnnp_dataframe_free(dataframe);
  }
  return 0;
}

Running ./csv_fuzz -runs=10000000 -max_len=2097152 took about 2 hours to finish. I fixed a few issues before the final successful run. With many-core systems becoming increasingly common, we should expect more programs to use these cores at levels traditionally considered single-core work, such as parsing. It doesn't necessarily need to be hard either!
With good OpenMP support and some simple tuning on the algorithm side, we can easily take advantage of improved hardware to get more things done. I am excited to share more of my journey into parallel algorithms on modern GPUs next. Stay tuned! There are some more discussions on lobste.rs and news.ycombinator. After these discussions, I benchmarked xsv on this particular CSV file and implemented zero-copy parsing (effectively pushing more processing to iteration time) on my side. The zero-copy implementation became trickier than I initially expected, because expanding from a plain pointer to a pointer + length has memory-usage implications and can be slower if implemented naively in the latter case. Here are some results for parsing from NVMe storage: when run on a single core, the parallel parser is penalized by the two-pass approach.
3
Quote Investigator: “It’s Easier to Ask Forgiveness Than to Get Permission”
Grace Hopper? Cardinal Barberini? Earl of Peterborough? David Hernandez? Helen Pajama? St. Benedict? Anonymous? Dear Quote Investigator: People who are eager to initiate a task often cite the following guidance. Here are two versions: This notion has been credited to Grace Murray Hopper who was a U.S. Navy Rear Admiral and pioneering computer scientist. Would you please explore this saying? Quote Investigator: Grace Hopper did employ and help to popularize the expression by 1982, but it was already in circulation. The earliest match located by QI appeared in 1846 within a multivolume work called “Lives of the Queens of England” by Agnes Strickland. The ninth volume discussed marriage advice offered by a powerful church official. Emphasis added to excerpts by QI: But, in truth, the cardinal Barberini … did frankly advise the duchess of Modena to conclude the marriage at once; it being less difficult to obtain forgiveness for it after it was done, than permission for doing it. A footnote listed the source of the passage above as “Earl of Peterborough, in the Mordaunt Genealogies”. Strictly speaking, the statement was not presented as a proverb; instead, it was guidance tailored to one particular circumstance. In 1894 a newspaper in Pittsburgh, Pennsylvania printed a thematically related adage within a story about mischievous children: The boys, let me add, every one had respectable parents and who would not, for an instant, have allowed such a prank had they known of its existence; but it is easier to beg forgiveness after the deed is performed. Another match occurred in the 1903 novel “A Professional Rider” by Mrs. Edward Kennard, but the form was not proverbial: Once married, it would be infinitely easier to ask her father’s forgiveness, than to beg his permission beforehand. In 1966 “Southern Education Report” printed an instance of the proverb spoken by David Hernandez who was a project director working for the U.S. 
government program Head Start: Hernandez began advertising for bids on the mobile classrooms even before the money to pay for them had been approved. ‘It’s easier to get forgiveness than permission,’ he explained. The above citation appeared in “The Dictionary of Modern Proverbs” from Yale University Press. Below are additional selected citations in chronological order. In 1970 “A Door Will Open” by Helen Pajama printed the adage. The book presented the story of big band trumpet player Randy Brooks who spoke extensively to Pajama: Sometimes it’s easier to be forgiven than to get permission. So, we made the move and then notified those concerned where he would be. There wasn’t much criticism to my knowledge. In February 1971 “The Reader’s Digest” printed an anecdote in a section called “Humor in Uniform” that included a remark attributed to an anonymous military man: . . . “Soldier, don’t you know you are not supposed to be using this vehicle for such a purpose?” Taking a nervous gulp, the young GI replied, “Yes, sir. But sometimes forgiveness is easier to obtain than permission.” Capt. David L. Benton III (Fort Sill, Okla.) In September 1971 a newspaper in Rochester, New York reported the observation of an unhappy government planner: Town Planning Board Chairman Harry Ewens, a guest at yesterday’s luncheon, echoed Strong’s sentiments and complained that he’s found people unwilling to obtain building permits before beginning construction. “They’ve found it a lot easier to get forgiveness than to get permission,” Ewens said. In 1978 the adage was attributed to St. Benedict by an environmental activist who was quoted in a St. Louis, Missouri newspaper: David D. Comey, head of Citizens for a Better Environment, a Chicago-based environmental group, said the process illustrates the wisdom of St. Benedict. 
It was Benedict who observed more than 1400 years ago that “It is easier to beg forgiveness than to seek permission.” In 1980 “Murphy’s Law Book Two” compiled by Arthur Bloch contained the following entry: STEWART’S LAW OF RETROACTION: It is easier to get forgiveness than permission. In 1982 the “Chicago Tribune” of Illinois reported on a speech delivered by Grace Hopper at Lake Forest College which included the following: “Always remember that it’s much easier to apologize than to get permission,” she said. “In this world of computers, the best thing to do is to do it.” In conclusion, this article presents a snapshot of current research. The proverbial form was in use by 1966 as shown by the statement from David Hernandez. QI hypothesizes that it was circulating before that date and was anonymous. (Great thanks to Claudia, William Flesch, and Nick Rollins whose inquiries led QI to formulate this question and perform this exploration. Some citations were previously shared by QI in 2010 on the Freakonomics website. Thanks to Barry Popik, Bill Mullins, Fred Shapiro, and Herb Stahlke for their research and comments. Further thanks to Stephen Goranson for finding the 1903 citation and for verifying the February 1971 citation.)
Plant trees while searching the web – More than 100M trees planted
Plant trees while searching the web – more than 100 million trees already planted. Ecosia is a search engine that uses 80% of its revenue to plant trees. Simply use it a few times a day to make an impact. Read more below. A Chinese proverb asks: when is the best time to plant a tree? Twenty years ago. The second best time? Today. The same is true of Ecosia. If you had used the search engine for the past couple of years, you would have helped plant thousands of trees. Too bad we didn't, so let's start today. There is no better day to start. Ecosia has planted more than 100 million trees. That's a big milestone, showing how sustainable tech, with the help of its users, can help the world. Related: Top Solutions to our Plastic Pollution Problem. Ecosia uses 80% of its ad revenue to plant trees. As users, we simply go to Ecosia.org instead of Google.com or Bing.com to perform our searches. That's all we need to do to have an impact. No signup, nothing. If you'd like to take it a step further, you can: sign up with Ecosia and track how many trees you've planted over the course of months and years; or set Ecosia as your default search engine. Ecosia is a great example of a product that can be seamlessly integrated into people's lives to make an impact. Side note: we do find that Google gives better search results, so we sometimes end up searching for Google within Ecosia. But that's perfectly fine with us. The pleasure of doing good every single day outweighs the occasional extra search query. Here's a nice little video. Gotta love the accent. Do you Love, Like or Dislike Ecosia? Let us know in the comments below!
What should we teach the next Zuck?
Andrew Murray Dunn · 12 min read · Mar 2, 2021. My intention in writing this piece is to spark conversation around an important meta-question, propose a personal development curriculum for technologists, and connect with those who are ready to jump into the mystery of this work. I wrote this on land that has long been stewarded by the Karuk people. What Makes a Humane Technologist? (Illustration: Max Loeffler) It's a critical question that deserves more attention if we want to make good on the promises of the movement. So far we have coherence around things like design principles (humane tech is sensitive to human nature, narrows the gap between the powerful and the marginalized, reduces hatred and greed, and so on). But what about the people creating the technology? How's their understanding of human nature? What's their analysis of systemic marginalization? Are their choices free of hate and greed? What is the ideal anatomy of the type of person we want to trust and empower to build in this new way? How does such great transformation happen? As a pioneer in the field, having co-founded the Digital Wellness Collective and the late Siempo, a humane tech interface (postmortem draft here), I have a few years of experience living these questions. I believe we are sorely lacking in the education department, and thus have a beautiful and historic opportunity to develop alternative pathways to the dominant and inhumane YC / VC / MBA culture. Key Takeaways: • We don't get humane technology without humane technologists. • We need to develop new curricula, pedagogy & communities of practice that empower humane technologists with the capacities, awarenesses, values and experiences that are essential to create in a more life-honoring way. • The path of the humane technologist offers a high ROI in terms of richness of life. How Are We Doing At Nurturing The Next Generation of Technologists?
(Image: cast of the HBO show Silicon Valley) “Human development and education are often the elephant in the room when it comes to calls for system-level change.” — Zak Stein. Despite the overwhelming success of The Social Dilemma and a myriad of organizing efforts by The Center for Humane Technology and grassroots supporters, Silicon Valley continues marching to the destructive beat of “move fast and break things.” High-achieving youth are peak stoked about a career path that will both make them rich AND have the biggest impact. The game is a race through an outdated and monocultural startup education towards unsustainable ideas of success, taking cues from role models who have proven profoundly inadequate to meet the complex challenges of our time (Mark, Steve, Jeff, Travis, Adam, Evan, etc. — and we can thank these teachers for inspiring a different way forward). Grind culture, toxic individualism, and other distortions of the capitalist mindset prevent most founders from slowing down to reflect on the inner dimensions of self or develop a critical lens. The CEO of OpenAI and former president of Y Combinator believes we should go even faster 🤔 There is a lot to critique about this culture. I believe slowness is important medicine for Silicon Valley. Much has been written on how to infuse tech and business with mindfulness and spirituality, or sustainability and justice. We hear calls for a more compassionate, high-integrity, heart-centered technologist, through scores of books, TED talks, and retreats focused on making that one key tweak. Teach ethics to CS students! Invest in Diversity, Equity & Inclusion trainings! Try that hot new plant medicine! Humane technology is more multidimensional than that.
But there is no source of holistic education reaching technologists that goes to the depth required to empower them to “create technology worthy of the human spirit.” No comprehensive training that would help founders and developers know themselves better and cultivate the skills to contribute to a thriving future. Exercise: Let's pretend you're God/dess 🙏🏽 Take a deep breath. Your biggest of the day 💨 You are designing a young entrepreneur. Somebody who is going to build a new technology that influences the lives of many people. (Illustration: Eliash Strongowski) What would you have this person study in school? Who would you introduce as role models for them? What initiations or challenging experiences would you have them face? What skills would you have them learn? What questions would you have them sit with? Why? These questions deserve more attention. I'd love to continuously entertain them by crowdsourcing perspectives and creating space for conversation, so we can bubble up the educational material and praxis that is most effective, accessible, and non-dogmatic. The adoption of humane tech principles is going to take time, and even if the next generation of technologists has them down, how much of a difference will it make if they are still racing towards old ideas of personal success, don't know who they are, and lack a critical lens? It's time for us to have a serious conversation about what influences ought to influence those who are gaining influence. About what type of person we want to grant such immense power to, amidst an unprecedented situation in which young people are coming into power at accelerating rates and dumping the amplified waste of their limited consciousness onto the rest of us. Obtaining the power of Gods, without the love or wisdom of Gods. I don't believe we have many role-model humane technologists or humane tech companies to point to and lift up today.
They are aspirational archetypes, the embodiment of which requires a radical slowing down, unlearning and relearning. This is a problem, since you can't be what you can't see. And it's also our great invitation. As the sociopolitical events of recent years highlight the dire need to transform systems of extraction and oppression, more fingers are starting to point to the person in the mirror. Proposing New Curricula. What is an alternative to the unicorn mindset? Zebras Unite offers a more ethical and inclusive alternative to the unicorn culture that is killing us. Similarly, I don't believe we get Zebra companies without Zebra people. So what would be on the syllabus? What are the markers or badges of a humane technologist? Through my challenging tenure in the Uncanny Valley and subsequently enriching experiences in the emerging humane technology field, I've found myself deep in inquiry around the aforementioned questions, gravitating towards the cultural wilderness for clues: attending gatherings of indigenous wisdom keepers, living in an East Bay activist home, supporting a metamodern political movement, studying at an awakening school… With Siempo I realized I had a context and permission to really “live the mission” of creating technology that protects and promotes human flourishing. I interpreted that by doubling down on working on myself, and focusing as much on ‘the how’ of our innovation as ‘the what.’ The natural evolution of that inquiry is to share some of what I've been exposed to with those ready to dive in. To be sure, I am a new student too, teaching what I need to learn. I couldn't possibly have the answers. Here's my stab at outlining a syllabus that reflects my unconscious attempt at creating a self-directed grad school between 2016 and 2020. Please fill out this form if you are interested in engaging with or supporting a curriculum like this, or have feedback and ideas to share!
I am in the process of prototyping content and experiences throughout 2021. If this were to be developed and offered as a transdisciplinary course, the goal would be to provide aspiring humane technologists with an introduction to relevant domains, experiential exercises, and curated resources to support deeper dives. Maps. Onramps for a long-term learning journey. No “one and done” certifications here. The complexity of our times demands a multi-decade continuing-education investment from those who want to walk the path. ‘Success’ at the end of such an intro course might look like each student: a) articulating a long-term learning journey of their most relevant questions to explore and inner work to engage with; b) creating a short-term action plan for progressing on that journey in a balanced, embodied and supported fashion; c) making a public pledge for how they plan on showing up to this work in themselves, their creations and their organizations (a la the North Star Ethics Pledge). The landscape is so incredibly vast that it's almost ludicrous to attempt to distill it into a neat package, let alone demand that professionals opt in to the massive endeavour. But there is value in providing some frame of reference to help serious humane technologists locate themselves, re-politicize their context and orient their personal development journey towards wholeness and harmony with life on Earth. We will need to experiment for the unique needs of industries and geographies, types of technologist and stages of organization. Curriculum and pedagogy must be informed by the latest philosophy and science of learning and human development, and be delivered in a manner that is as experiential, interpersonal, emotional and embodied as possible. I also imagine communities of practice developing for folks to practice this new way of creating together, since we all have things to learn from each other and require accountability structures to actually do the work.
I want to give a special shout-out to the crew at JumpScale (one of the organizations I've worked with and currently advise) for seeding some of the inspiration for a curriculum that takes an honest look at all relevant dimensions of one's life and creations. Their Good Clean Well philosophy supports the health and resilience of leaders and teams in achieving long-term goals. I like to think of their work as that of a “business healer” or psychotherapist who goes in, finds where the energy is stuck, and helps move it in a way that cascades benefits to all stakeholders. (Image: indicators of wellbeing that JumpScale analyzes in developing a treatment plan.) The curriculum I'm proposing is a more technologist-focused version, with a special focus on idea and early-stage founders. At the end of the day, we need to set the bar higher for a new generation of technologists who are more well-rounded, balanced and responsible than the average person working in tech today. We need humane technologists who have reverence for all life, rather than just seeking to profit from it. Who know how to move slow and pay attention, rather than just move fast and break things. Who are willing to challenge underlying assumptions, rather than just follow the conventional wisdom. Who are in right relationship with self, other, land and whole. The first round of aspiring humane technologists stumbled around alone in the dark to explore these questions. We must provide a stronger foundation for the next wave, building towards some new form of education that can be replicated and localized. Let's start emphasizing the need, inviting conversation, experimenting in groups, sharing learnings. I envision a future in which a variety of educational offerings exist for technologists to do their appropriate inner work in a structured, holistic and communal way. Where the path and livelihood of the humane technologist is viewed as courageous, honorable and sacred.
How can we use the power of storytelling to paint humane technologist education as an attractive use of energy, thereby hacking the adult development of those coming into power? A New Story Of Success. (Art: Mark Henson) This is exciting! We have a wonderful opportunity to practice and demonstrate a new way together. We have the chance to write new stories. Of what it means to be a good technologist. Of corporate accountability. Of personal and professional success. Inspiring people not out of guilt or ego, but from a higher-octave vision of what Dr. Martin Luther King referred to as “Beloved Community” or Charles Eisenstein as “the more beautiful world our hearts know is possible.” The old story of Silicon Valley success is becoming stale for many and unsustainable for all. We need a new one: one that trades personal financial gain for equitable distribution; power-over for power-with; isolation for interdependence. One that inspires a generation of minds and hearts to dive into this emerging field. But how is one supposed to develop all the traditional skills required to be a sharp technologist, AND afford the time and money to learn and do all this other stuff? This is a tall order, especially if one has caregiving responsibilities, student loans, or other demands on resources. My hope is that cultural and financial barriers will lift for those who are committed to this path, as capital gradually begins to flow into the space and cultural trends around what constitutes a “good life” rapidly shift. While also acknowledging that some amount of trust, risk and creativity is often required to create things that are just and beautiful.
Supportive trends and possibilities include remote work and community living, public-sector investment and favorable policy changes, UBI and debt jubilee, new cohorts of regenerative investors and a spirit of reciprocity from beneficiaries of Big Tech gains, accessibility pricing in education, excessive consumption and wealth hoarding becoming increasingly distasteful, and young people dropping out of or forgoing traditional college. (Art: Joshua Mays) The path of the humane technologist is a beautiful one because it provides purpose, self-learning, world-learning, health, wisdom, community, service, maturation. It's a context for learning about all these priceless things that make you a more complete human. What is that worth to you? What does your heart yearn for? I'm not just talking about a sense of pride in doing the right thing. I'm talking about creating connections across lines of difference. Knowing for yourself what is true. Looking at the ugly parts of yourself that you never thought you would touch in this lifetime. Opening to the full spectrum of emotions and textures of the human experience in ineffable moments. Discovering all the ways you were made to contribute to the dance of life. Creating a dream career rich in knowledge, adventure, skills, laughter, strength, generosity, love. Beyond your wildest imagination. I can vouch. Humane technology has been my greatest teacher, permissioning me to enroll in a perpetual school of life. A priceless investment in self and career, community and world. And it has only just begun. Visionary writer Octavia Butler offers: “All that you touch, You Change. All that you Change, Changes you.” How much are you willing to be changed? Next Steps. If you enjoyed this piece, please consider clapping 👏🏼 (you can clap up to 50 times if you liked it a lot 😂), and sharing it with others in your community — whether organization, academic class, tech forum or friend group. Comment on Medium with your reflections!
I'm practicing moving slowly and not trying to thing-ify every idea that fills me with euphoria, holding myself to a higher standard of embodying the principles I wish to convey. This has led to a winter of several cycles of exploration and stepping away. In this moment I feel excited about continuing to explore four things: Online Course. Building out the curriculum outlined above and running an online training this year for humane technologists. I am open to co-developing this with individuals and/or organizations that share a similar vision. What do you think ought to be in the syllabus? Community of Practice. Activating a “Regenerative Creating Circle” for a small group of idea-stage founders who are ready to embark on a learning journey together in order to practice creating in a new way. This may involve cycling through some of the course material in #1. More details here. Individual Support. Offering personalized coaching and consulting to founders and teams. I am also available to customize retreats in Northern California (this spring) and the New York area (summer and fall). Humane Technologist Fellowship. Creating new pathways for young people to engage with curricula like this, a la the Thiel Fellowship. If any of the above piques your interest as a student, collaborator or sponsor, please fill out this form to connect and help shape these offerings in a good way. You may also contact me at andrewmurraydunn at gmail dot com. Thank you for your trust and attention, 💙☀️ Andrew Murray Dunn. Andrew is a student and teacher at the intersection of humane technology and personal development. He supports entrepreneurs in crafting Learning Journeys, Impact Plans, Thought Leadership & Personal Retreats. Website | Substack | LinkedIn | Facebook | Twitter | Instagram. Special thanks to the following friends who provided feedback on this piece: Belinda Liu, Gabi Jubran, AJ Goldstein, Ty Hitzeman.
Flanderization
(Image by Winston Rowntree of Subnormality.) "I think Homer gets stupider every year." — Professor Lawrence Pierce, The Simpsons 138th Episode Spectacular. The act of taking a single (often minor) action or trait of a character within a work and exaggerating it more and more over time until it completely consumes the character. Almost always, the trait or action becomes completely outlandish and becomes their defining characteristic, turning them into a caricature of their former selves. Sitcoms and sitcom characters are particularly susceptible to this, as are peripheral characters in shows with long runs. The trope is named for The Simpsons character Ned Flanders, who was originally depicted as a friendly, generous Christian neighbor and a model father, husband and citizen, making him a contrast to Homer Simpson. As seasons progressed, however, he became increasingly obsessively religious, to the point where he eventually embodied almost every negative stereotype of the God-fearing, Bible-thumping American Christian evangelist. Note that the key to this trope is that the process is gradual: the character starts relatively normal with a few quirks, the quirks become more prominent, then the quirks gradually become the character. If it is simply a matter of the character being different early on, before the writers knew what to do with them, that is Characterization Marches On. In general, comedies, especially sitcoms, fall into the trap of Flanderization because Character Development is far less important than Rule of Funny. Given a choice between getting a laugh or moving the story forward, the former will almost always take priority. Flanderization doesn't have to be a bad thing — sometimes it can be used to expand on a background character's personality when they are brought to the foreground, or to make an otherwise bland character stand out more.
It can even be beneficial on a cast-wide scale: a comedy with a cast full of zany, outsized personas will probably be funnier than one full of nondescript straight men. When Flanderization occurs as the result of adaptation from one medium to another (manga to anime, for example), it's Character Exaggeration and frequently a sign of Adaptation Decay. May sometimes be related to Lost in Imitation. See also Never Live It Down (when the character is more associated with some action or event than the character actually changing), Unintentionally Sympathetic (when realistic quirks are mishandled by the writers) and Forgot Flanders Could Do That (when a story brings back pre-Flanderized aspects of the character as a reminder that those traits are there, even if you don't see them much any more). Compare Temporarily Exaggerated Trait, which is like Flanderization but only done temporarily. Compare and contrast Early-Installment Weirdness (as it applies to characters), with early depictions of a character being different from later ones simply because the producers hadn't figured out what role they should play in the story. See also Moral Disambiguation, for when the morality of a work becomes more black and white. The opposite of this trope is Character Development, naturally. The Other Wiki has their own article. It credits us! Here's a list of cases of Flanderization: Billy Mays got a lot louder over the years; compare his earlier ads to the later ones. Regis Philbin in ads for TD Bank: he started off as a person who would question what the bank offers, but in later ads he didn't know what electronic banking was and called Kelly Ripa every hour about his balance. Dilbert has played with this trope over the years. With some characters it's played straight: the Pointy-Haired Boss went from an ordinary Mean Boss to a complete moron, while Wally went from Brilliant, but Lazy to just lazy.
Alice went from being a brilliant engineer with a low tolerance for stupidity to being defined by regularly screaming and punching people (or injuring/killing them with just a look). On the other hand, Dilbert himself has become less of a nerd and more a mixture of Everyman and Only Sane Man. While most of the FoxTrot characters had their personas taken to the extreme at times, Andy was especially Flanderized, going from a simple, caring and concerned mother to the Granola Girl Moral Guardian of the strip who serves her family earth-friendly fare like braised zucchini every meal, keeps the thermostat so low that it flash-freezes the steam from a cup of coffee, and throws a fit if she catches the boys playing a violent video game. Unfortunately, since the series became Sunday-only, there's little chance of her changing. On the other hand, before this happened Andy pretty much didn't have a personality at all beyond Mom. Roger Fox started off as a classic sitcom dad, somewhat bumbling but still level-headed and respectable. He wasn't very good with computers, but the series started in the late '80s, when they weren't nearly as common. Over time he's gone into Too Dumb to Live territory and possibly even beyond; his computer ineptitude has progressed into full-blown Walking Techbane, he gets beaten in chess by 4-year-old children, and he almost blows himself to Kingdom Come every time he tries to grill hamburgers. Which is not to say this didn't happen to other characters, too. When the series started, Jason Fox was a smart 10-year-old, but he still hated school as much as his siblings. As the series progressed he's become a full-fledged nerd, possessing extensive knowledge of computing and physics far beyond his years, loving school so much that he dances with joy over hard tests, and once bringing down the entire Internet with a single virus... but he still believes that The X-Files, mutant powers, and undead Ringwraiths are real.
On the other hand, Peter de-Flanderized from a bit of a Jerkass into a Straight Man. Outside his massive appetite, he's probably the most normal of the Fox family now. Garfield managed to invert this trope, then play it straight. He started out very lazy and sarcastic, but de-Flanderized into a more playful attitude by the late eighties. Over time, he's gradually shifted back into his more cynical self. Garfield the character's de-/re-Flanderization pretty much mirrors the strip perfectly (as you'd expect). It began as a Slice of Life strip, but as the character became less Flanderized, the strip shifted to a light surrealist style, which set the tone for the franchise as a whole (probably best shown in Garfield and Friends). Around the mid '90s the strip shifted back to the slice-of-life style, becoming re-Flanderized into the strip we know today. There's also Garfield's appearance and mannerisms. Originally, aside from a few human-like quirks, Garfield looked and acted like a typical cat. Over time, however, he was depicted with more and more human-like behavior, until it became extremely rare to see him engage in any feline activities at all. Played straight with Jon Arbuckle, who started as the Straight Man and a bachelor who cared for Garfield. During the first months of 1979, he was Flanderized into the Straw Loser compared to Garfield (with his role of Deadpan Snarker going to Dr. Liz Wilson), and by the late 1990s, he was given a more Cloud Cuckoo Lander personality that occasionally borders on being a Manchild. Odie was simply a standard dumb dog in his earliest depictions; his low intelligence didn't extend past that of a typical fictional canine. Nowadays, his stupidity is played up at every opportunity; on occasion, he's treated less as a dog than as a stand-in for low intelligence of any sort.
Peanuts: Charlie Brown, somewhat surprisingly, was a victim of this trope. In his earliest strips, he was basically the prototype of Calvin from Calvin and Hobbes in that he was a wisecracking rascal who often got the upper hand. For instance, he comes across another character who is sweating and asks her if anything is wrong. "I'm hot!" she says. "You don't look so hot to me," he retorts before running away with a smirk. Another strip has him trying the patience of his friends as he fussily arranges them for a group photograph before playfully announcing that his camera doesn't actually have any film in it. As he runs away with his angry friends in close pursuit, he laughs and says, "I like to live dangerously!" In the very earliest strips, he even had girls fighting over his affections. Some vestiges of the later Charlie Brown were there in the early days, with the other children being somewhat crueler to him than they were to their other peers, but it wasn't until later that his tendency to always come in last became such an overpowering part of his characterization. The first two girl characters to appear in the strip, Patty and Violet, suffered heavily from this as the strip went on. Initially they were (usually) portrayed as nicer characters than the two boys (Charlie Brown and Shermy), but as the 1950s went on they started to adopt "mean girl" personas and spent most of their time together putting Charlie Brown down. At first their attitudes seemed like a response to Charlie Brown's arrogant "nobody likes me despite how great I am" streak in the mid 1950s, but by the time he had fully Flanderized into a total sad sack by the early 1960s it seemed like they were just piling on him for the sake of it. By the late 1960s they had been phased out as regulars in favor of Lucy, who Charles Schulz felt worked much better as a female "bully" character. Since Greg Howard stopped writing Sally Forth, Ted has become quite the Manchild.
This is a bit of an odd case, though, as Francesco Marciuliano has admitted that he based his portrayal of Ted somewhat on himself; the Flanderization was almost a complete coincidence. KikoRiki: Early on in the show, Krash was a goofy kid, but he still had control over his actions and his friendship with Chiko was less complex. However, as the series progressed, he became nearly a menace to the KikoRiki, the biggest example being Deja Vu, where he put his friends in serious danger while thinking he was having fun. Chiko was a nerdy and shy hedgehog who spent most of his time at home and occasionally checked his health. Later on, he became The Paranoiac who suspects someone is watching them, and became dependent on Krash's presence despite the obvious danger of being around him. Wally started off as a hopeless romantic, who was deeply depressed but kind at heart. Later on, he became a Nervous Wreck whose obsession with Rosa passed all limits, with him hopelessly trying to attract her attention. Rosa was initially portrayed as a kind young girl with a dream of becoming a princess, occasionally teasing her crush Wally. Later on, however, she became easily angered, manipulative and ruthless, with one episode centering on her making hypnotising dolls so she can turn the entire world obedient and girly. Dokko started off as an Absent-Minded Professor who knows about everything, from nanotechnology to inventing voice-altering devices. In some episodes of the New Season, these traits are dropped, making him a "pseudoscientist" (so called by Tatyana Belova, a writer of those episodes) whose attempts to grow strawberries result in a pile of manure and in him talking to trees, bugs and other objects in hopes of studying them. Olga's cooking abilities have also been downgraded in some episodes of the New Season, thanks to the aforementioned Tatyana Belova.
Usually, she is an excellent cook who occasionally screws up due to her forgetfulness and clumsiness (which even became a plot point in PinCode's "Multi Cook"), but Tatyana Belova portrays her as, quote, a "chucklehead who can't cook", burning batch after batch of the honey pancakes she tries to cook for Barry in "Valhalla" with a straight face, and serving a stew in "Winter Tale" that is horrible in both taste and design, looking like a brown mush. In Bad Press , Starscream is incensed to discover that his portrayal in various fanfics about the Decepticons either exaggerates a few of his traits into being his entire personality, or just has him acting in ways he never would. "Starscream snapped at (Megatron), ruffled to see himself stretched out across reams of bad prose, twisted into unrecognisable beings." Actually invoked by the Aperture Scientists in Blue Sky. In their Brain Uploading of Wheatley, they deliberately messed with his personality so that the part of him that produced terrible ideas would be dominant and would override other aspects of his personality in a crisis. Invoked in Eugenesis by Sygnet, a Decepticon scientist, who sees Galvatron as having the worst parts of Megatron's personality amplified a thousand-fold. Fashion Upgrade ( Miraculous Ladybug ): In-Universe, Kim bases his portrayal of Hawkmoth on footage of his brief outing during Heroes' Day, amping his Large Ham villainous posturing up to eleven. This happens a lot to characters featured in The Loud House: Revamped . Estee's Triptych Continuum features a kind of In-Universe version of this with falling into the mark, an extremely common psychological disorder among ponies where the pony allows their special talent to dominate their life to the point where there is nothing outside the mark.
And turned horrifying in A Mark Of Appeal, with the discovery of a disease that amplifies the mark magic until it renders the pony unable to do or think of anything that is not the exercise of their talent. Cloudy with a Chance of Meatballs : Baby/Chicken Brent has the catchphrase of "Uh-oh" from his time as the mascot of the Baby Brent Sardine company. In the first film, he says it pretty often. In the sequel, on the other hand, he says it so much that it reaches Verbal Tic levels. The LEGO Movie 2: The Second Part : Lampshaded with Benny, who asks Queen Watevra Wa'Nabi how she knew he loved spaceships when they meet for the first time, only to immediately follow it up by proudly boasting "Loving spaceships is my one defining trait!" Happens to several characters in Alan Partridge: Alpha Papa . Dave Clifton goes from a fairly unremarkable provincial DJ, about whose alleged alcoholism Alan made nasty jokes, to someone who gleefully and openly recounts his experiences with booze, cocaine, heroin and prostitutes. Lynn, previously barely mentioning her Christianity, suddenly becomes notably more openly religious. Curiously, Alan himself undergoes the opposite, becoming less distinct and more general a character in the movie. The Weyland-Yutani Corporation in Alien . Each film, book, and game seems to exaggerate their attempts to exploit xenomorphs for profit just that much more. In the first film, they learn about the xenomorphs and try to use the crew of the Nostromo to bring one back to Earth for study. Beyond that, they don't seem particularly obsessed with it; it's just some lifeform they think might be valuable. Eight movies and a ton of comics and video games later, Weyland-Yutani seems obsessed with studying and profiting off xenomorphs to borderline suicidal degrees, to the point you start to wonder how the company even turns a profit when its only projects seem to be screwing around with alien bugs.
Their amorality is also more and more exaggerated; in the first movie they're willing to sacrifice a tiny crew of glorified truckers to get the alien, while later works show them sacrificing ridiculously huge numbers of people to their bug hunts. It's easy to lose count of how many colonies and research facilities are destroyed for the sake of studying a species that really doesn't seem to have any practical applications. The first two sequels to the horror classic Child's Play Flanderized Chucky into a Stupid Evil villain who was incredibly incompetent and never achieved his main objective, instead wasting time going on killing sprees rather than possessing Andy. The horror comedies played the whole killer doll concept for laughs, possibly because the writers realized how silly the franchise was becoming. But then came Curse of Chucky , which brought Chucky right back on track. While few would consider the Leprechaun films masterpieces, the first film at least was a straight slasher movie with some comic relief. Even then, most of the humor was either sight gags or rather crude puns. By the second installment the series started amping up the humor and self-awareness to the point where they overtook the horror elements. In later sequels the Leprechaun impersonates Elvis and John Wayne, speaks entirely in rhyme, becomes a binge drinker and pot smoker, and most egregiously, raps. Angela in the Eddie Murphy movie Boomerang originally started out as really laid back; in one scene, when she finds out that Marcus cheated on her, she angrily tells him to stay the fuck out of her life, but later on in the movie her cursing habits are Amped Up to Eleven, whereas before that scene she was relatively mellow.
Probably the best-remembered characteristic of Chinese detective Charlie Chan is his use of pithy "Oriental" aphorisms — a trait which comes directly from the Warner Oland film adaptations, and which was the only aspect of those adaptations that Chan's original author Earl Derr Biggers himself heartily disliked. Not surprising, given that he'd explicitly created Chan as a subversion of the tired "fortune-cookie" stereotypes then popular. Eddie Wilson in Eddie and the Cruisers started out as a serious musician who wouldn't sell out. By the end of the sequel, his only emotional response was to run away from anything that might be critical of his music. Death in the Final Destination movies. In the first movie the deaths were freak accidents with only minor supernatural intervention from Big D — with the exception of the poor teacher's hilariously Rube-Goldbergian demise. Apparently it was the most talked-about death scene, so subsequent installments gradually increased the complexity of the deaths. Flodder : The characters got more cartoonish as time went on. Ma Flodder became more of a mean drunk, Johnny became more of a criminal schemer, Male Kees became more of a Manchild, Female Kees became more of a Dumb Blonde, and the two youngest children became even bigger brats. In Freddy vs. Jason , Jason Voorhees is taller, slower and more stupid than ever, possibly in order to more sharply contrast with Freddy Krueger. In The Hangover Part II , Alan's Manchild traits are driven to higher levels. The Hangover Part III manages to push it even further. Agent Tom Manning from the Hellboy film series. In the first movie, while not a very good field leader, he was still a competent bureaucrat; he and Hellboy butted heads but ultimately gained a bit of respect for each other, and bonded over cigars. The second movie made him almost completely incompetent, and reduced him to bribing Hellboy with cigars to keep him in line.
Maybe justified by the fact that, without the professor, there is no one who can truly keep Hellboy from doing something stupid. Help! : The film versions of the band were supposed to exhibit exaggerated versions of their own personalities: John as a snarky smart aleck, Paul as a smooth lady killer, George as a miser, and Ringo as a gloomy misfit. Most of this gets lost in the finished version, though. Whereas A Hard Day's Night showed the band as individuals, Help! now shows them as one band living in the same house with four separate doors. Brought up in-universe in Inception . Part of what makes Cobb understand that his mental projection of Mal is fake is that it's basically a Flanderization of the real person, taking her love for him and acting as if that was her one and only personality trait, with nasty results. Marcus Brody in the Indiana Jones films. In Raiders , he was a Reasonable Authority Figure who was implied to have had Indy-like adventures of his own in his younger days. In Last Crusade , he was a totally Absent-Minded Professor who got lost in his own museum. The James Bond franchise becomes increasingly campy over the course of its history, with increasing reliance on implausible action scenes, cartoonish villains, science fiction gadgets, Bond One Liners and loads of sex. The Roger Moore era was considered the height of the franchise's campiness, while the following Timothy Dalton films were an attempt to make the franchise darker. The campiness came back over the course of the Pierce Brosnan era. Casino Royale (2006) was specifically created to completely eliminate the campiness and return to the franchise's more realistic roots. It succeeded for a while, before careening off again in another direction: melodrama. The Lethal Weapon series gives us Dr. Stephanie Woods, who, in the first film, was a competent psychologist with legitimate concerns about Riggs' stability.
By the third film, she was an inept, touchy-feely shrink who served as little more than comic relief. It's heavily implied in Lethal Weapon 4 that it was actually Riggs deliberately toying with her over the course of the series that pushed her to this point. In the Marvel Cinematic Universe , Tony Stark in Iron Man was a charismatic celebrity billionaire who could walk into any group of people and become the center of attention because of his fast-talking wit. After his life-changing experience being captured by terrorists, his moral compass changed dramatically but his personality remained the same: still fast-talking and quick-witted. This was largely due to Robert Downey Jr. and the director deciding on a lot of improv so they could take the best material from it. Starting with Iron Man 2 his wit was replaced with making a lot of quips, generally refusing to take a situation seriously and often being easily distracted. This continued through The Avengers and Iron Man 3 before reaching its apex in Avengers: Age of Ultron (it's even been argued that, due to the success of RDJ as Tony Stark, this trait has been pushed through much of the MCU, with almost every individual series having a lot of quipping characters even though it has little basis in their various comics counterparts). This trait was largely reversed in Captain America: Civil War as Tony was going through a lot of personal issues and looked constantly tired, and we see him return to that "charismatic celebrity" personality when he went out to recruit Peter Parker, which was maintained with Spider-Man: Homecoming . Happens to Drax in the Guardians of the Galaxy movies. In the first film, he is intelligent but Literal-Minded and doesn't understand figurative speech. The movie is vague as to whether his native language doesn't use idioms, if he doesn't get human expressions, or if he's just dense in more ways than one.
But in the sequel, this trait is expanded to him having no verbal filter whatsoever, saying whatever is on his mind regardless of how wildly inappropriate it is. Thor started off as a loveable Fish out of Water who needed to learn how to be less arrogant but was otherwise an intelligent and noble warrior. In more recent movies, he's a Manchild who can't understand basic concepts such as Peter Quill owning his own spaceship, and is often self-serving to the point where he cracks jokes or sulks when he should be going off to fight evil. In the original Men in Black , K is a rather stoic individual who takes his job seriously, but approaches everything with a calm demeanor, contrasting J, who doesn't take the job seriously but overreacts to everything. By Men in Black 3 , K is so stoic he is unable to crack a joke or a smile. Happened to Peter Sellers' Inspector Clouseau in The Pink Panther series. His French accent was originally straightforward, though A Shot in the Dark introduced odd accent-based pronunciation quirks ("beump" for bump, for example). When he revived the character in the mid-1970s, the accent was significantly thicker and the mispronunciations were more frequent ("minkey", "rheum", "leu", etc.). Other Shot in the Dark elements became Running Gags too: he donned more bizarre disguises with each film, and Cato's attacks grew increasingly destructive, as did the slapstick in general for the whole run of films. However, this went over like gangbusters with audiences and it didn't violate Clouseau's basic character, making it one of the less destructive examples of Flanderization on this list.
Freddy was fully Flanderized by the end of the series ( Freddy's Dead: The Final Nightmare ), having become a campy, pun-spouting parody of his former self. This version of Freddy has been likened to an evil Bugs Bunny. The character was returned to his dark/scary roots in the 1994 film Wes Craven's New Nightmare . However, this film takes place outside of series canon; it's also not the "real" Freddy, but an ancient demon who adopted Freddy's persona. The 2010 Nightmare reboot featured a serious, darker Freddy. But "Reboot Freddy" has not supplanted Robert Englund's version as the "official" Freddy Krueger in popular culture. So the "real" Freddy is still campy as ever, per his most recent canonical appearance (2003's Freddy vs. Jason ). In the first Honey, I Shrunk the Kids movie, Wayne Szalinski is a brilliant scientist despite a few flaws that he has. The Direct to Video sequel Honey, We Shrunk Ourselves turns Wayne into a Genius Ditz at worst, considering why and how he and Diane got shrunk. Pirates of the Caribbean , originally an Affectionate Parody and homage to the pirate genre, became a parody of itself after the first film, when all the character traits, quirks, and set-pieces that were more subtle originally were subject to relentless self-referencing. In the first Rush Hour movie, Chris Tucker's character (whose Butt-Monkey status stems from his Cowboy Cop tendencies alienating everyone around him) is actually a fairly competent detective, but simply not as combat effective as Jackie Chan's character. In the sequels, his character's competence is completely jettisoned, he becomes a classic Small Name, Big Ego, and much Uncle Tomfoolery ensues. Star Wars movies: Yoda's diction for the most part simply swapped nouns and verbs in certain situations in a manner similar to some Earth languages.
It also appears that he may have been doing this intentionally just to get on Luke's nerves; once Luke figures out who he is he starts speaking somewhat normally for the rest of his screen time. This was exaggerated in pop culture, leading the writers of the prequels to make up lines such as "Not if anything to say about it, I have!" More precisely, when he first appeared in The Empire Strikes Back , his grammar actually stayed within the bounds of normal poetic English, and the most egregious features like clefting were 1) largely limited to his "crazy swamp Yoda" persona, and 2) used in extremely short sentences, like "suffer your father's fate, you will". Meanwhile, in the prequel trilogy, these features become a lot more common and a lot more extreme, to the point of lines such as "Confer on you the level of Jedi knight the council does, but agree with your taking this boy as your Padawan learner I do not." While C-3PO clearly wasn't as brave as R2-D2 and could express fear, in the original film he was still capable of self-sacrifice, even urging Luke to abandon him after 3PO had been badly damaged by the Sand People. Later, when he was cornered on board the Death Star by some Stormtroopers, he managed to successfully bluff his way past them ("They're madmen! If you hurry you might catch them."). He also felt grief for Luke and the others when he thought they were dying. In The Empire Strikes Back , he's Flanderized into a total coward, unable to think about anything but himself, and frantically advocating surrender when the Falcon is being chased by Imperial Star Destroyers. What nuance he regained in Return of the Jedi (being highly instrumental in gaining the Ewoks as allies and again able to provide a momentary distraction) was lost in the prequels, where he contributes nothing except extremely lowbrow humor.
Since the Disney purchase, however, Lucasfilm seems to be attempting to avert this; All There in the Manual for The Force Awakens reveals he has become the Resistance's droid Spymaster. He doesn't really have any noteworthy scenes in The Last Jedi , but he is very important to the plot of The Rise of Skywalker . The Trade Federation Battle Droids were originally introduced as mindless animatronic soldier drones. They were just human enough to be used for slapstick, but were still inhuman enough to be mildly menacing. As the decade wore on, the writers slowly morphed them into a big goofy gang of cowardly Harmless Villains. It is a joke among many that Stormtroopers cannot aim and are incompetent in their roles, something that has crept into the franchise itself through numerous references. In the Original Trilogy, however, this only really applied when they faced the protagonists. Their introduction in A New Hope features them curb-stomping the garrison of the Tantive IV with ease, and the troops investigating the whereabouts of the Death Star plans on Tatooine come close to finding them. Later on in the Death Star proper, the guards quickly see through Han and Luke's disguise, and in the ensuing battle are under orders to capture them alive. In The Empire Strikes Back, the Snowtroopers again make short work of the Rebels at Echo Base, and the Stormtroopers at Cloud City are again under orders to keep the heroes alive. It is in Return of the Jedi that the portrayal of Stormtroopers as incompetent pushovers began to set in, especially since they were defeated by primitive teddy bears. From that point onwards, the franchise only depicted Stormtroopers as rank-and-file disposable mooks instead of the elite soldiers they originally were.
Even more serious installments like Rogue One and Andor still depicted Stormtroopers as non-threatening enemies; in the former, a blind monk curbstomped a dozen Stormtroopers with his staff, while the latter showed a Stormtrooper getting head-butted into submission by a civilian scrapper. When his character debuted in The Force Awakens , Poe Dameron was depicted as a selfless (if occasionally cocky) starfighter pilot who evoked the amiable charisma of the original trilogy heroes. He was heavily Flanderized in The Last Jedi , now being depicted as a brash, argumentative, rebellious jock who disobeys orders and finds himself at odds with the secretive, no-nonsense Admiral Holdo. In The Rise of Skywalker , he was Flanderized even further, as it's revealed he used to be a spice runner (the in-universe term for a real-world drug smuggler) and part of a cartel until he abandoned it to become a starfighter pilot. Despite the revelations about his criminal past damaging his already marred reputation even further in the eyes of his friends, Poe redeems himself at the film's climax, successfully leading a squadron in the terrifying and deadly Battle of Exegol. Gale Weathers of the Scream films. In the first, she's simply a very determined reporter who resorts to bending the rules to get her story. She mainly has a beef with Sidney because they have differing viewpoints on one person - Sidney believing Cotton to be her mother's killer, Gale viewing him as an innocent victim of an abusive affair with Sidney's mother (and she turns out to be right!). She's given several softer moments with Dewey and the only other person she's outwardly nasty to is her cameraman Kenny. By the time of the second film, she's suddenly a shameless bitch with poor social skills who snipes at everyone she comes across. While the trauma of the first film could have had some effect, the films act as if Gale was always this way. The Matrix : Sunglasses and long coats.
In the first film they actually took a relatively realistic approach to them: Neo and Trinity wear them in order to hide their belts of weapons from the security guards in the Agent's office block and then immediately ditch them before commandeering the helicopter. Even Morpheus has lost them by the time he is captured by Smith. You will note that after these points we never see any of the main characters wear them again until the closing scene with Neo flying away. Come the sequels, and everyone is wearing sunglasses and/or a long coat in more or less every single scene throughout the movie, even when it makes no sense, such as when holding a mission briefing in a darkened room. Dumb and Dumber : Harry and Lloyd. In the original, they were stupid but had some common sense and their own individual personalities. In Dumb and Dumberer: When Harry Met Lloyd , their stupidity is pretty much their main personality trait, if not their only personality trait! In Major League , Roger Dorn was a veteran ballplayer who was a superstar in his prime, and even at the tail end of his career had plenty of skills and talent left. He's just stopped applying himself, coasting on his past and slacking off in the field (thus not making the plays he should be making) to avoid an injury limiting his options during upcoming contract negotiations. Jake Taylor calls him out on his attitude and behavior, and how his lackadaisical play is spoiling things for the younger players who may be getting their only shot, while also reminding him that he used to be a great player, eventually leading to him straightening out and playing like the star he is. Come Major League II, Dorn is whiny, incompetent, and is presented as a complete joke of a player, whose announcement that he's reactivated himself is met with annoyance by his teammates. Jay and Silent Bob received quite a bit of Flanderization over the course of Kevin Smith's The View Askewniverse film series.
Jay became wilder, stupider and more perverted. In the original Clerks , he's shown conversing with a girl quite casually (despite insulting her when she arrives), whereas in the later films like Jay and Silent Bob Strike Back he can barely talk to a girl without it being completely sex related. The number of F-bombs he dropped also seemed to increase with each film. Silent Bob's characterization actually went back and forth between films. He spends most of the original Clerks smoking cigarettes and didn't really do anything until towards the end, where he dances with Jay and later gives the convenience store clerk, Dante, some helpful relationship advice (this would later become the defining trait of the character). The following film Mallrats however portrayed a much sillier Bob, where he partakes in some goofy antics and is much more expressive. Chasing Amy went back to the more reserved and deadpan Bob from Clerks, Dogma did a mixture of the two personalities, and from Strike Back onward he is goofy again. In Jay and Silent Bob Reboot he takes a serious Level in Dumbass, to the point where he often comes across as barely smarter than Jay. Just how silent Silent Bob is varied quite a bit - from a guy who just isn't into idle chitchat (and who has a companion more than willing to make up for said lack) to someone who resorts to charades when conveying extremely relevant information only he knows. In Teenage Mutant Ninja Turtles , Donatello demonstrated a certain knack with mechanical contraptions, as emphasized in the cartoon and toy line. In Teenage Mutant Ninja Turtles (2014) , Donatello speaks in a nerdy voice, and wears high-tech, spectacle-like goggles to emphasize that he is a nerd. The Terminator series had the T-800 models gradually develop a fixation with sunglasses. In the original movie, the only reason the evil T-800 played by Arnie wore shades was to cover up his face after a car crash ended up exposing his robotic eye.
However, this ended up becoming such an iconic image that it was used for the publicity still that ended up serving as the film's poster. In Terminator 2: Judgment Day , he ends up stealing a biker's pair of shades for no reason other than to wear them for most of the first half before losing them while rescuing Sarah Connor from the mental asylum, while in Terminator 3: Rise of the Machines his obsession with sunglasses becomes an actual character trait, to the point that he is particularly selective over which kind he wears. The young T-800 Arnies that appear during the flashbacks in Terminator Genisys and Terminator: Dark Fate are also shown wearing sunglasses for no particular reason. Everyone in the first three Transformers films, but special mention goes to Sam's parents. In the first film, they're fairly typical embarrassing parents; the Dad is cheap and obsessed with lawn care, and the Mom is (as Rifftrax puts it) shrill and annoying. In one scene she openly talks about Sam masturbating, but in the privacy of their own home, and she apologizes for it when she sees how awkward it made everyone. In later films, shouting embarrassing things in public seems to be her favorite thing to do, and she doesn't care what anyone thinks about it. The Dad also seems to become less sophisticated as the films go on. In the first film, he's shown relaxing with a glass of wine. In the second film, he's drinking a Budweiser at a restaurant in Paris, while threatening a mime. In Animorphs , this happened to Rachel, though it was intentional. All the characters were Flanderized, actually, to a lesser extent. Cassie was the most notable (other than Rachel) — she goes from a slightly more moral person than the others to someone who couldn't stand to kill Visser 3. Note that her Flanderization was mostly reversed after Book 45.
Jake was noticeably Flanderized as well — his leader angst goes from mild to extreme, until the last book, at which point he feels like Tom's and Rachel's deaths were his fault and becomes clinically depressed. The Flanderization was, really, the point. The war took whatever aspect of their personalities was most useful to the fight (bloodlust, strategizing, manipulating people, etc.) and forced them to exaggerate it until it ate the rest of their lives. Anita Blake suffers from this in regard to Anita's sex life and sexually-fueled magical powers, to the point where the longest book in the series to date barely managed to get out of the bed, to say nothing of the bedroom. The Baby-Sitters Club : While she was originally just an aversion of the Model Minority stereotype, Claudia Kishi, despite being in eighth grade for about ten years, eventually gets to the point where she can't even spell her friends' names — or her own! Despite being able to spell them perfectly well in seventh grade, mind. Most of the other girls' quirks (Kristy's bossiness, Dawn's environmentalist soapboxing, Mallory's geekiness, and Jessi's anxiety about her race and dancing skills) suffered this to some degree, as well. Margo Pike's motion sickness. In Boy-Crazy Stacey, Margo almost gets car-sick on the way to Sea City but feels better once she moves to the front seat. Somehow, this turned into pretty much her only character trait, to the point where it was surprising she could walk down the street without getting sick. Diary of a Wimpy Kid : Greg's Straw Loser status has become a lot more noticeable as time went on, to the point where the plotline of each book starts off well, but only gets worse and worse for Greg. Rowley, Greg Heffley's best friend, was a normal kid who showed a slight interest in things outside his age group, but was otherwise coming along well.
Following The Last Straw, Rowley's personality became a lot more juvenile and so did his interests, to the point even Greg felt embarrassed to have him as his friend. However, as of the recent books, this is being slightly undone, with Rowley finally growing out of the kiddy interests. In fact, there are hints that, unlike Greg, Rowley is actually advancing as a teenager. Susan Heffley was originally an amazingly-embarrassing mother who was a bit behind on what kids Greg's age liked and was slightly skeptical of modern technology. She did show a stern side, such as when Greg broke Rowley's arm in the first book and after Rodrick's party in Rodrick Rules. The later books made her a strict moral guardian who thinks all new technology is ruining children's social lives, to the point she even made the town ban technology for two days. However, she still has the occasional embarrassing side. Manny Heffley started as a spoiled, mildly socially awkward child whose nasty prankster side would come out occasionally and who showed surprising intelligence. In later books, this reached the point where he refused to socialize with children, yet was clever enough to turn the electricity off in his house and even speak fluent Spanish. It gets even worse with Manny in Wrecking Ball, where he LITERALLY BUILDS AN ENTIRE HOUSE BY HIMSELF. Discworld : The characterisation of Rincewind shifted from sensible fear of the unknown to full-fledged cowardice, and finally to having an entire philosophy based on the principle of running away from things. Then again, in his first appearance Rincewind was a greedy Jerkass who was only protecting people so the Patrician wouldn't have to kill him. By Sourcery he Took a Level in Kindness and became a Jerk with a Heart of Gold Punch-Clock Hero. There's also Willikins, who started out as Standard Issue Butler #48592, and then his combat skills were established in Jingo and his street-fighting past in Thud!.
By Snuff a lot of his lines revolve around his ability to kill anyone with anything sharp, and he doesn't even bother to put on the Jeeves impression. Granny Weatherwax's first appearance has her as a very competent witch (her Wizard Duel with the Archchancellor of Unseen University ends in a tie), but she isn't portrayed as being anything out of the ordinary as witches go, and doesn't receive an inordinate amount of respect. A few sequels later and she is The Dreaded, with entire species having titles for her that basically translate to "Stay the hell away!". Her stubbornness, strong will, grumpiness, and pride have all been exaggerated as well. Perhaps because of this development into a walking Deus ex Machina, Granny Weatherwax has not starred as a focus character since Carpe Jugulum , with Tiffany Aching becoming the new protagonist of the witches stories, with Granny Weatherwax as a mentor. There always seems to be a hint that Granny could deal with the situation (though you might not like her solution) but is letting Tiffany handle it herself because they both agree that being a witch means taking care of your own problems. Vimes starts out as an intelligent and honest but nonetheless flawed man, with a deep sense of righteous anger at the unfairness of the world. As the series progresses his righteous anger is inflated to the point of making him into a semi-Messianic figure incapable of doing wrong — in Thud! he alone is able to resist a very powerful soul-possessing demonic superbeing, which regards Vimes as a worthy opponent and occasionally endows him with supernatural vengeance/crime-detection powers (albeit ones that mainly extend to seeing in the dark and understanding the speech of things that live there). Well, it depends on your definition of wrong — as time goes by, he becomes much more willing to bend the rules, something he worries about. 
Additionally, while Vimes is increasingly smart and courageous, a lot of that could be put down to his having crawled out of the bottle and displacing his alcoholism with crime solving — something, ironically, that makes him very much like the vampires he hates. Over the course of the series, Vetinari sheds more and more of his weaknesses until by Snuff he is an all-powerful, all-seeing, all-knowing demigod whose only real character trait is "Always right", with it becoming a rule in-universe that nobody could ever replace him. This coincided with him becoming Pratchett's most obvious mouthpiece. Don Quixote : In the first part of the novel, Sancho Panza gives a Hurricane of Aphorisms only once. In the second part, he gives it continuously, and so do his wife and his daughter. David Eddings' series The Elenium and The Tamuli have Kalten, who starts out as Sparhawk's boyhood friend, a talented knight and skilled fighter who can't use magic because their kind of magic requires being fluent in the Styric language, which Kalten can't get the hang of. As the books continue, he turns into someone too stupid to spell his name correctly, for some reason. It later turns out to be Obfuscating Stupidity, as he lampshades how he plays up his being Book Dumb to make people underestimate him. Kalten: I know, this stupid-looking face of mine is very useful sometimes. In Bram Stoker's Dracula , Lucy is a flirtatious, yet demure, woman who's Mina's dearest friend. But in almost every adaptation, she's portrayed as a constantly horny slut, who has little regard for her own health and safety, which is often what gets her turned into a vampire by Dracula in the first place. Everworld : In the first book, Search for Senna, the eponymous character was a quiet, withdrawn, and somewhat strange Emotionless Girl who had a mostly positive romantic relationship with David, and demonstrated genuine concern for others on occasion.
As the books went on, her negative traits were repeatedly emphasized and expanded, though this was initially saved from being Flanderization by her character also becoming more complex and interesting. In the last two books, her goal of overthrowing the powers of Everworld and crowning herself took over her characterization, and just about all of her other personality traits were thrown out in favor of it. She became an outright sadist, a tyrannical and megalomaniac Evil Overlord who no longer cared at all for how much death or pain she caused if it got her greater power. However, this is subject to YMMV, as some have interpreted it as Senna becoming overwhelmed by the magic of Everworld, and developing a God complex. Played for horror and incorporated as a plot point in Faction Paradox . A Space Cult Colony called the Remote colonists became sterile, and so developed technology to avoid extinction: Remembrance Tanks. You insert some biomass ready for cloning (a corpse) and get a few friends of the deceased so the machine can scan their minds for memories of the dead person, allowing the device to weave them together and form the clone's mind. This had the unpleasant side effect of making every iteration more and more stereotypical, to the degree Remote time-travelers are often disgusted and confused by meeting their future selves, often wondering if they are just that damn unpleasant. In the first Fear Street Seniors book, Jade Feldman and Dana Palmer are noted as being unhappy that Phoebe Yamura got to be head cheerleader instead of them but are generally normal teenagers who are nice enough to their peers and are on equal footing with each other. By the time of the seventh book, Jade doesn't seem to care about anything besides being Head cheerleader and obsessively bullying and insulting Phoebe for taking the position she sees as hers, with Dana as her Beta Bitch. GONE has Drake, who was much more a misanthrope and sadist than he was a misogynist. 
He seemed to want to torture everyone — or most everyone — equally, and never seemed to hold much of a preference for whom he wanted to beat up. Yes, he did hate women with a near-religious conviction, but he seemed to hate men just as much. Yet in FEAR, he suddenly has some personal vendetta against all females, which he himself Lampshades. This is how self-will destroys the damned in The Great Divorce . If one embraces a sin and never lets it go, it overwrites the rest of one's character, and sometimes the rest of one's self. Zoey Redbird from The House of Night series went from a somewhat more advanced vampyre who happened to have a boyfriend in Marked to The Chosen One with an Unwanted Harem by Betrayed. Both Brandon's ignorance and Nick's rage towards it are flanderized throughout The Leonard Regime . All of the Flock from Maximum Ride suffer this. In The Angel Experiment at least they were a bit more realistic and believable. Nudge has gone from an extremely talkative young girl to a materialistic celebrity-obsessed tween. Angel is a manipulative Karma Houdini. Total is now even more of a cartoonish sidekick figure than he was originally. Iggy seems to be getting dumber and more childish in each book. Where in the first three he was treated by Max and Fang as one of the older kids, now he appears to have a mental age of twelve and spends most of his time with Gazzy, who admittedly has a similar outlook and personality, but is way younger than him. Max has started to use Totally Radical slang and seems to be occasionally channeling the spirit of Bella Swan, in the author's clumsy attempt to cash in on the recent teen romance successes. Hannibal Lecter, who first appeared in Red Dragon , was originally just a very intelligent and cultured man, whose expertise in his chosen field of psychiatry made him a particularly dangerous (and somewhat ironic) insane killer.
By the (book) sequel, The Silence of the Lambs , he is quite clearly one of the greatest, if not the greatest, psychiatrists in the world, and by the threequel Hannibal, he's revealed to be a world-class genius in pretty much any field he sets his mind to, from Renaissance art to particle physics. And in the TV series, his brilliance and complete lack of human emotion have evolved to such a degree that he's basically a secular Satan whose plans — no matter how convoluted — very nearly always work. Rob Roy : Invoked In-Universe. Main character Frank Osbaldistone describes his father as a harsh and exacting but fair person who tolerates other ideologies for the sake of being a good merchant, and then Rashleigh talks about him as an opportunist who plays both sides of the political and religious divide to profit at the expense of everybody. Then Frank grumbles that his cousin has turned his portrayal into a caricature. The Shadow , being a fairly extreme Pragmatic Hero, lacked much in the way of a Thou Shalt Not Kill code, and in the pulp novels, he had no problem with simply gunning a man down with his dual .45s. However, he was also fairly surgical with his killings, frequently left criminals to the police, and believed that simply shooting first and asking questions later was a bad idea (largely for practical reasons, mind, as attempting to indiscriminately murder as many criminals as possible is a good way to create more problems), to the point that one story explicitly contrasted him with the copycat vigilante Cobra, who did use those tactics and was treated as a destructively reckless fool. Many later works take the idea that he's okay with killing and essentially make it his entire character, with him massacring criminals at a moment's notice, often so that a character with a code against killing can look down on him for being a total psycho. Dr. Watson in Sherlock Holmes .
While Arthur Conan Doyle's stories portray Watson as quite capable and intelligent in his own right (while lacking the unique imaginative genius of Holmes), it has become quite customary in various fanfic, film and TV portrayals to make Watson a completely incompetent and blundering dullard so as to be a more transparent foil to Holmes, following the example set by Nigel Bruce's portrayal of a bumbling and dim-witted Watson to Basil Rathbone's Holmes in several 1940s films. Of course, portrayals of Watson as an idiot raise the question of why one of the world's most brilliant men would want such a moron as an associate and colleague. Without a Clue satirizes this trend with a genius Watson and an incompetent, fraudulent Holmes. In Neal Shusterman's The Skinjacker Trilogy , Shusterman unveils a world between life and death, where your appearance is based entirely on your memory of yourself. This leads to such effects as remembering only the chocolate smudge on your face and turning entirely into chocolate, or remembering your acute sense of smell and gaining nostrils that extend to your feet. The Lawful Evil villain even encourages this trope as her thousands of followers reenact their "perfect day" every single day (when they're not fighting our protagonist). This example takes the trope in more of a literal sense, as you may have guessed, rather than the degeneration of a character's demeanor. In A Song of Ice and Fire , Hallis Mollen develops a tendency to state the obvious, which gradually becomes his defining trait. Star Trek Novel Verse : Some accuse the Star Trek: Vulcan's Soul trilogy of flanderizing the relationship between President Zife and Koll Azernal, with Zife being an ineffective president relying on scheming Azernal to run the government for him. It is certainly more obvious in this trilogy than in Star Trek: A Time to... . The Brains and Brawn partnership of Rehaek and Torath from Star Trek: Titan is flanderized by this trilogy, too. 
The novel Before Dishonor essentially Flanderizes Worf, Seven of Nine, and Admiral Nechayev, presenting them in a surprisingly one-dimensional way, taking their various social flaws (Worf's aggressive stoicism, Seven's cold precision, Nechayev's impatience and sharp tongue) and blowing them out of proportion. Or so some readers argue. In the Star Wars Expanded Universe , nearly everything mentioned in the Star Wars movies as a side note is turned by the Expanded Universe into the main characteristic of whatever subject. Darth Vader was also Flanderized to some extent depending on who was writing him. In some novels he was a madman who would kill his own people at the drop of a hat, strangling people for delivering bad news to him in his quarters to the point that officers drew lots when having to deliver messages. In other works, while he was not a boss most people would want to work for, he only killed those he deemed hopelessly incompetent or actually at fault when things didn't go Vader's way. Apparently all Corellians find statistical analysis abhorrent, due to the manner in which Han Solo told C3PO to shut up in The Empire Strikes Back ("Never tell me the odds!"). "You look strong enough to pull the ears off a gundark." The Clone Wars had its gundarks modeled with ridiculously huge ears. The explanation from one of the designers was "We know about the gundarks that they have huge ears, so they have to be visible". Just about every Sith falls victim to this at some point or another, especially as one moves further away from their original appearance. Most notably, Revan. In Karen Traviss' early works, the Mandalorians/Clones were badass and the Jedi were somewhat clueless/misguided. Cue later works where the Mandalorians are perfect at everything and the Jedi are basically evil hypocrites. Zabrak (Darth Maul's species) Sith like double-bladed lightsabers better than regular ones.
Most species with a single mention in the movies experience this treatment. All Hutts are crime lords, all Wookiees go berserk, all Bothans are spies, all Trandoshans are bounty hunters, all Rodians are more bounty hunters, all Gand are still more bounty hunters, et cetera. One can find exceptions, but source material states this as the rule. Thrawn's strategic ability tended to get exaggerated to a ludicrous extent by later writers. Timothy Zahn, the author of the Thrawn Trilogy, took a shot at this in the Hand of Thrawn duology where, after watching the rest of the galaxy work itself up into a frenzied panic over the mere possibility of the Grand Admiral's resurrection, the main characters all note that Thrawn was good, but he wasn't that good. In the original Strange Case Of Dr Jekyll And Mr Hyde , Dr. Henry Jekyll's evil alter ego Edward Hyde was merely an ugly, deformed man of small stature who was violent and cruel, engaged in activities that Jekyll would've never done, and murdered people. Several adaptations depict him as a giant hulking monster and often portray him as very lecherous. Tarzan : The original Tarzan was a Genius Bruiser who managed to teach himself English and French from books his parents left behind, as well as an inveterate prankster with a quick wit and a well-developed but dark sense of humor and a full-blown dandy who loved to look good, even before he came in contact with civilization. All adaptations seem to ignore this in favor of playing up his physical prowess. Jacob Black from Twilight . Over the course of the saga, his initially fairly healthy and respectful affection towards Bella was Flanderized into obsession, probably done to sway "Team Jacob" shippers to be more sympathetic to Edward. For the most part, unfortunately, this ended up only spurring the Team Edward/Team Jacob rivalry even further. Warrior Cats : Hollyleaf starts as the smart one of the group who tries to respect the Warrior Code.
By the end, she is completely consumed by the Warrior Code, freaking out if someone even mentions breaking it. This culminates with her finding out her mom severely broke the code and going on a murderous rampage. Far earlier, in the first series, Fireheart's sister Princess is a kittypet who is curious about Clan life but wouldn't want to live that way, and who makes one or two comments on how Fireheart doesn't look like he's getting enough to eat. By the end of that series, she's become a hysterical worrywart terrified of the forest. In the first Wayside School book, Joy steals Dameon's lunch (and frames several other students for the theft) because she's seriously hungry, having forgotten her own lunch, and is said to have suffered extreme guilt over it for months afterwards. In the two subsequent books, one of Joy's major character traits is that she steals things whenever she gets the chance, including, at one point, another lunch (this time showing no guilt at all). One of the main things that annoyed The Beatles about their 'Fab Four' image was how it reduced all four of them to a quick-caption stereotype which lingered — John was the 'funny' one, Paul the 'handsome' one, George the 'quiet' one and Ringo the 'normal' (i.e., less talented and klutzy) one. Lennon in particular has been caricatured as the Genius with a capital "G" within The Beatles, almost as if he was the only intelligent, innovative and creative force within the group. This seriously underestimates the talents of the other band members, and of the group as a unit. Furthermore, most of the Beatles songs everyone sings along to were made possible by McCartney's ear for a good melody. If you look at the Beatles' solo careers, one quickly realizes that none of the individual members were able to have a successful solo career, except for Paul — and even his solo works have a lot of cheesy stuff in them.
Most of John's solo output is pretty much hit-and-miss, except for John Lennon/Plastic Ono Band and Imagine, due to his tendency to write too many personal songs about his relationship with Yoko, which are quite boring for other listeners. Apart from that, Lennon also made some very avant-garde tracks under the influence of Yoko Ono that most people will never play more than once. In fact, the bestselling Beatles solo record of all time is All Things Must Pass by George. In modern times, the Lennon/McCartney writing partnership tends to be oversimplified as 'Lennon wrote all the angsty, complex, rebellious and therefore 'good' songs, whereas McCartney wrote all the Silly Love Songs and fluffy album filler.' This not only tends to unfairly deny McCartney the credit in several cases and do a disservice to several of the songs, but collapses entirely when you remember that Lennon wrote "Mean Mr. Mustard", "Norwegian Wood", "This Boy" and "Dear Prudence" and McCartney wrote "Eleanor Rigby", "Helter Skelter", "Carry That Weight" and "Yesterday". Furthermore, half of the Lennon/McCartney songs were genuine 50:50 collaborations. Lennon did tend more towards Creator Breakdown than McCartney in later years, however. George is usually thought of as either 'quiet', 'mystic', or 'grouchy', but people forget that George Harrison wrote "Something" and "Here Comes The Sun" from Abbey Road and "Savoy Truffle" from The White Album . George was also characterized in works like A Hard Day's Night and Yellow Submarine as being a somber and serious mystic (especially in the latter). His son Dhani complained about this once, as his dad actually had a pretty good sense of humor. The man personally financed Monty Python's Life of Brian just because he wanted to see it, and the last letter he ever wrote was to Mike Myers about how much he loved Austin Powers.
In A Hard Day's Night , George is more "deadpan" than "serious", not only because it was part of his personality but also because he lacked John's and Ringo's natural talent for comedic acting. But he gets two of the funniest bits of the movie: the "what would you call your hairstyle?" joke and the scene where he's mistaken for a fashion model (both of which work well with a Deadpan Snarker). Overlaps a bit with Truth in Television: when asked what they would do with the money they made in A Hard Day's Night , George simply asked the reporter "What money?" Elvis Presley has been shamelessly Flanderized after his death by Elvis impersonators. In his youth, Elvis was actually slim, with boyish good looks and a pleasant, smooth tenor voice with only a little shaking in it. If he was anything like most of his impersonators, he would not have been nearly as popular as he was in the late 1950s. Guns N' Roses were a basic rock band in every sense and proud of it. Then Axl Rose started wanting to make more and more long rock ballads after the huge success of "November Rain" and later "Estranged". Slash, the popular lead guitar player, disagreed with this direction, among other personal issues with Axl, and left the band. When Liberace started in the 40s, he was known for wearing nice suits and styling his hair. Over the years his outfits and personal style got gaudier and gaudier until he became more known for his over-the-top flamboyance than for his piano playing. N.W.A, on the album Straight Outta Compton , mostly stuck to an aggressive Gangsta Rap style they called "reality rap", and used quite a few songs to explicitly criticize the conditions and harassment endured by the black population of Los Angeles. Then Ice Cube bailed, and they became ridiculously over-the-top, violence-celebrating Horrorcore with Efil4zaggin. Ozzy Osbourne gets this a lot from the press. For example, from the way people talk, you'd think he bites the heads off bats all the time.
In truth, he only did such a thing once (completely by accident!) and went to the hospital for a rabies shot immediately after. Pantera. When they hit the mainstream with Cowboys From Hell in 1990, they had a then-unique "street tough" attitude but had no problem getting into some pretty emotional/sensitive topics with their music. Starting around Far Beyond Driven, however, their "toughness" was heavily Flanderized, with many songs revolving around Phil's over-inflated ego, and (with a couple of exceptions) the band shed all traces of angst and sensitivity. Queen is another example of fan-Flanderization. Due to the publicity surrounding Freddie Mercury's bout with AIDS, many now assume their classic songs are about his illness and/or his sexuality. Freddie was actually bisexual, and he wasn't diagnosed with AIDS until after the release of A Kind of Magic (the band's third-to-last official album before his death). Freddie himself has been Flanderized by his tribute acts; he had so many different looks down the years, yet what does every tribute singer focus on? Yep, THAT yellow jacket. Red Hot Chili Peppers tend to get Flanderized by detractors as always singing about California. Their jumbled lyrics are also exaggerated, as is their white-boy rap, which Anthony Kiedis has largely avoided for the past 15 years. The AV Club tends to portray them as nigh-unintelligible horndogs. They've had success with ballads, and then did so many that they inevitably became bland and predictable. All of the Spice Girls. Documentaries and interviews with them before they ever signed a record deal show that the styles and personalities they became known for were always there, but after they got some fame and money behind them, and more importantly were given their nicknames by the British media, things quickly went much further.
Compare their early TV appearances, when they were more or less just being themselves, to their appearances in late 1996, after Top of the Pops magazine gave them their nicknames and they were hamming it up, and then to their peak Flanderization around the time of Spice World in 1997, when they had fully embraced the nicknames and were essentially playing cartoon characters of themselves. When Rihanna came out, she sang about a variety of topics. Now it seems like all of her singles have devolved into songs about sex, with the occasional love song among them. When Weezer burst onto the music scene back in 1994, they were just naturally geeky. Instead of trying to have some kind of bombastic or showy image, they were completely themselves. However, around the time the Green Album was released, their geekiness was heavily Flanderized. They all began dressing deliberately in geeky/outdated fashions, frontman Rivers Cuomo began wearing thick-rimmed glasses, etc. Plus, even though he's now well over 40, Rivers STILL obsessively sings about topics like snagging the sexy cheerleader goddess! Taylor Swift's music appears to have devolved from the usual country themes and stories her young audience can relate to, into Take Thats aimed at her several famous exes. Many of her album tracks have focused on subjects other than the "Taylor gets back at her exes" angle so often depicted in the media ("Back to December" places the blame solely on herself; "Mean" is about bullying; "Never Grow Up" is a celebration of childhood innocence; "Long Live" is a tribute to her backing band/stage crew/fans; songs like "Mine" and "Enchanted" show positive experiences with exes), but it seems most of the singles chosen from her albums like to emphasize her "getting back at exes" side. She's shown more self-awareness about this in her later albums.
"Blank Space" in particular sends up the image of her as the girlfriend from hell, and other tracks on 1989 recognize Swift's tendency for crash-and-burn romance ("Style," "Wildest Dreams") or blame herself for relationship foul-ups ("I Wish You Would"). As a rule, gods operate on Grey-and-Gray Morality, though of course this varied by what sect you belonged to. Nowadays we have the dual tropes of Everybody Hates Hades and Everybody Loves Zeus to make things more black and white. Those two trope names come from Classical Mythology, wherein Hades' villain status does have some basis: he's not evil, but he's grim and unfriendly, and people tended to be scared of him because, y'know, he represents death. The idea that he's Ancient Greek Satan basically takes that characterization and runs way overboard with it. Which said, the Greeks may well have started this sort of process with some of their own gods; there is some evidence that they imported Ishtar, a complex Mesopotamian goddess of love and war, and quickly flanderized her (with a side-order of Chickification) into Aphrodite, a more focused Love Goddess. In which case, the trope may well be Older Than Dirt. In Egyptian Mythology, Set was subject to flanderization before any modern people had the chance to reinterpret him. The oldest stories still have him murder and try to usurp Osiris, but of course, "killing people whom you don't like" isn't exactly uncommon for gods in any belief system; the earliest takes still present his rivalry with Horus as pretty even-handed, with the two more like Friendly Enemies who also have sex one time. As the centuries passed Osiris, Isis and Horus became more important to the Egyptian religion, and as a result, Set became more vilified. It didn't help that one of his jobs is "god of foreigners," which made him a lot less popular when Egypt was temporarily taken over by the Hyksos. 
He still had a role protecting Ra from Apophis, making him important to the cosmic order, though eventually even that started getting passed on to Thoth (probably because he was the easiest god to replace Set with in carvings). Again, this makes the trope Older Than Dirt. In the old polytheistic days, gods weren't characters in anthologies; they were everyday gods that you'd pray to when you needed something, or just as part of your daily ritual. So when you'd hear "Zeus," your first thought would be "king of gods, god of hospitality, law, civilization," not "Depraved Bisexual who'll do Anything That Moves in Whatever Shape He Likes." Similarly, "Hera" would inspire "goddess and protectress of women, home, family, and domestic life," not "Clingy Jealous Goddess in Sheep's Clothing." However, because now all that we have left of these gods are the stories they left behind (and what stories!), we tend to picture pretty much all gods as caricatures of their original selves. Sports broadcasters and a lot of radio personalities do this to themselves as time goes on. Chris Berman, Tony Kornheiser, Dick Vitale, and Jim Rome all immediately come to mind as people with particular quirks that get used more and more as they go on; their knowledge hasn't grown, so they cover for it with their personality. John Madden is a great example of this. While coaching the Oakland Raiders, Madden was actually considered one of the smartest football coaches of his era, while also being as tough, loud and opinionated as you'd expect from an NFL coach. This dual personality carried over into Madden's early days as a broadcaster, where he mixed cogent analysis with silly dad humor and a bombastic personality, to the general delight of sports fans. By the later stages of his career, Madden leaned more and more on the latter (regularly using catchphrases like "Boom!" and "Doink!"
in place of detailed commentary), with Pat Summerall and Al Michaels often acting as straight man to Madden's goofy antics. Canadian hockey broadcasting legend Don Cherry certainly qualifies. Originally a serious, though outspoken, broadcaster noted for wearing occasionally over-the-top suits, he has since morphed into a loudmouthed cranky old man who wears the most garish suits known to man. When the Interstitial: Actual Play party encounters Captain America, he has been boiled down to two character aspects: an amicable fellow and a plucky fighter. "Stone Cold" Steve Austin, anyone? At the height of his popularity, he was simply a very tough working-class guy who was lashing out at the oppression of the modern world. Over time, the "underdog" side of his character became deemphasized and the "rebel" side became predominant, with the inevitable result that he devolved into an unabashed Jerkass — and the fans still cheered him! The Undertaker went from being a nigh-unstoppable zombie type of character to becoming almost literally a god of death and the occult. Briefly reversed when he became a "biker" character for a few years. Paul Bearer was no stranger to this trope either, with his voice and mannerisms getting progressively goofier over the years. Compare this early 1991 promo to this later 1994 promo. Inverted with Triple H, who started out as an Upper-Class Twit but eventually developed into a fairly normal, non-pretentious guy who just happens to be very rich. Despite the negative connotations behind Flanderization, it's been said that the best gimmicks are really just exaggerations of a wrestler's real-life personality. In The Shield, Dean Ambrose was a mercenary type of character who was able to put on a normal face, but was obviously unhinged and just waiting for an excuse to hurt somebody.
After The Shield broke up and he didn't bother with that normal facade anymore, he began devolving briefly into a "lunatic" character who frequently costs himself matches by trying to perform pointless stunts that only end up hurting himself. The formation of Team Hell No caused the heel personas of both Daniel Bryan and Kane to be cranked up to near self-parody levels, creating some of the most hilarious moments in recent WWE history. The audience loved their interactions so much that despite being heels who hadn't changed character at all, they got the face treatment. Paul Heyman defined the term "Hardcore Wrestling" as "Having a hard-working and dedicated attitude towards fans and the art of professional wrestling, no matter what it takes". This could mean the guy who's willing to get powerbombed through a flaming barbed wire table, but it could also mean the guy who puts on great technical matches night after night. Many outsiders missed the latter meaning and took it to mean "no rules and lots of violence". Heyman's preferred term for wrestling with no rules and lots of violence was "Extreme Wrestling", as shown by the name of his promotion. Goldberg being so known for the Spear at the expense of all his other moves that, when Gillberg missed it in his match with Luna Vachon on the January 11, 1999 WWE Raw , professional moron Michael Cole said "He only has one move." It's especially insulting since Goldberg didn't even use the Spear in his debut match. He slammed Hugh Morrus twice and then hit what would become known as the Jackhammer. Ron Simmons being reduced to saying only "DAMN!" or words that rhyme with it. Jerry Lawler becoming an unlistenable horny screeching idiot yelling "Puppies!" at the expense of everything he'd accomplished, particularly his legendary status in Memphis. Much like Lawler, The Iron Sheik becoming a profanity-spewing meme-generating caricature of himself. 
Even on This Very Wiki, ranting/shoot interview/memetic mutation Jim Cornette taking precedence over 1980s-1990s manager/promoter Jim Cornette. Done intentionally with Maria Kanellis. After competing in WWE's Diva Search, she was hired to be one of their backstage interviewers, but her inexperience caused her to constantly trip over her words and flub simple lines. She was eventually given the character of The Ditz to better paper over the nerves, and Kanellis, to her credit, successfully pulled it off with comedic gusto and solidified herself in the business, outlasting just about all of her contemporaries in the process. Birdie in The Great Gildersleeve has more nuanced characterization in early seasons, even the occasional subplot about her life outside of her job, such as the episode where she enlists Gildersleeve's help with an auction at her church. Over the years she devolves into a one-note Mammy stereotype, who only drops in to make sardonic comments on Gildersleeve's absurdity-of-the-week. One of the many side effects of the World Split hitting Ink City was certain characters undergoing this as a sign they were growing increasingly unbalanced. Don, for instance, is a fan of giving and receiving hugs, which he calls 'sugar'. Due to losing all his ink after the Split, he turns bright pink and can't say anything other than "Sugar sugar sugar." At one point, Open Blue 's Espartano unit went from an ostensibly unisex Tyke Bomb training program to an Amazon Brigade factory. Has a bit of Never Live It Down due to the main contributor just happening to prefer badass lolitas, inadvertently leading the other players to assume said flanderization was factual. They in turn started making Espartano characters using said assumption, resulting in the concept's flanderization. This was cleaned up in v5, when the new unit for the role, the Engelmacht, was explicitly stated to be unisex.
PeabodySam's Garry's Mod tribute to Dino Attack RPG plays this for laughs, with every character oversimplified to their most basic traits. Rex is "a guy who was a dinosaur", Dr. Rex goes from spending three quarters of the RPG as the Big Bad to "a mad science guy who went into a thing that did something and became a dinosaur", Hotwire is a guy who lost a leg, Dust goes from a complex anti-hero to "this guy who was really cool and then he died", Kate Bishop is "a girl who cried", and Andrew is some guy who watched a movie with an alien. In Embers in the Dusk, the Chaos Polities of Tjapa are basically a Flanderized Imperium with all the fanaticism and oppression, but without the Realpolitik and the occasional Only Sane Man around. One criticism of the second edition of Exalted is that it took the interesting characters from 1e and flattened them out. Especially the Deathlords — First and Forsaken Lion went from being an interesting character who wanted to take over the Underworld and didn't care about Creation to Mask of Winters v2 who wants to CONQUER AND/OR DESTROY EVERYTHING! Thankfully, Third Edition seems intent on stepping away from the sins of second edition while also avoiding the glut of uninteresting or incredibly forgettable characters that both prior editions had. A few bits of fiction have been released, though whether they succeeded is up to the readers. In Warhammer and Warhammer 40,000, the Chaos God Khorne was flanderized from an incredibly bloodthirsty but moderately honorable warrior who preferred Worthy Opponents and whose servants would sometimes spare non-combatants, to wanting all blood from everyone all the time. While Khorne is still often portrayed as honorable, his followers are now almost all blood-thirsty psychopaths. Note that at no point was Blood for the Blood God! Skulls for the Skull Throne! not part of the deal.
Interestingly, in the 2012 audio drama Chosen of Khorne, Khârn the Betrayer — who is often considered to be the most Ax-Crazy of all of Khorne's mortal servants in Warhammer 40,000 — was significantly de-flanderized. He's quite level-headed outside of combat, despite being prone to violent visions and impulses, and even refuses to harm non-combatants (so long as they don't touch his axe). It's also revealed that he's been killing off the remnants of the World Eaters' Pre-Heresy command structure in a bid to reunite the Legion under his own banner, which is hardly the MO of a frothing, myopic lunatic. And again in The Wrath of Kharn short story. Swell guy, that Kharn. In Warhammer 40,000 (originally Warhammer in Space!), the Imperium of Man is continuously suffering from this trope. Every year, it seems to become more repressive, depressing, backwards, ignorant, and desperate. The Space Marines become more Knight Templar, the Imperial Guard becomes more likely to invoke We Have Reserves, and so on. The Tau, however, have received modest character development, transforming into a more complex faction and one of the few Greys in the usual Evil Versus Evil. Writers have been trying to reverse the process of flanderisation and turn the Imperium back into an authoritative and overly bureaucratic but still functional dictatorship with genuine heroes. Matt Ward has been working on Flanderizing the Ultramarines from a respected puritan Chapter of strict adherents to the Codex Astartes into the absurd force of unimpeachable and unbeatably awesome "Ultrasmurfs". At a stroke, he accomplished this mission for the Grey Knights with the 2011 codex, turning them from an interesting faction of thin-spread heroes fighting desperately against horrors which often threatened them and the entire Imperium into Big Damn Heroes who God-Mode Sue it up, curb stomp all your foes in tabletop, and not only are completely incorruptible, but can't be beaten. Game-Breaker units? Check.
Incorruptible Pure Pureness that can survive alone in the Warp and isn't tainted by bathing in the blood of Sisters of Battle? Check. God-Mode Sue fluff? In spades. Before this, the Grey Knights were well-liked by fans, who still used them even with a codex that was a bit out of date. The Necrons originally had vague allusions to ancient Egyptians, and their fantasy counterpart, the Tomb Kings. Come 5th Edition, they're all wearing pharaoh hats and gold and blue jewelry while wielding sickle blades. Also done by Matt Ward. The players received massive Character Development in the process. That's because Games Workshop were savvy enough to rein in Ward's antics this time, with at least two people assigned to simply check that he did it right and stop him when it went awry. It worked. Of course, it also resulted in the complete destruction of the Necrons' pre-fifth-edition backstory, but hey, you can't win them all. To say nothing of the fact that, with the Necrons' pre-5e backstory being a race of mindless Omnicidal Maniac Skelebot 9000s controlled by evil Energy Being Physical Gods who feed on bio-energy, there wasn't that much of a backstory to lose. There have been some attempts to reverse the Flanderization of the Imperium in the tie-in novels, most notably the Horus Heresy books and, of course, Ciaphas Cain, HERO OF THE IMPERIUM!!! A smaller one, but no less hilarious, is the Flanderization of weapons and equipment in the Codexes. Space Wolves don't have "lightning claws", they have "Wolf Claws". Blood Angels don't have Power Fists, they have "Bloodfists". And almost every single melee weapon available to the Grey Knights is now some flavour of "Nemesis Force", although that is somewhat justified due to them being previously named under the blanket "Nemesis Force Weapons", meaning that there is simply more of a distinction now rather than a full rename (unlike the aforementioned Bloodfist, which is functionally identical to Power Fists).
There's also the "Artifacts" section of newer Codexes, meant to be a selection of rare weapons and equipment that can only be selected once per army. However, the naming gets silly with the Tyranids, who have "Bio-Artifacts" that somehow grow on their bodies but are still "artifacts" in that they're unique in the universe. The Inquisition has been partially Flanderized (at least among the fandom) in regards to their usage of Exterminatus. In-universe, Exterminatus is meant to be a last resort when there is no hope of reclaiming the planet from something like Chaos' corruption or a Tyranid invasion, and the Inquisition is very careful to make sure that there is absolutely no alternative before sacrificing an inhabitable world (a very finite resource that the Imperium can't afford to waste); in the fandom, simply mentioning heresy in any way, shape, or form is enough to warrant the planet being nuked into oblivion. Half-Orcs in 1st edition Dungeons & Dragons were originally just ugly anti-heroes with a bonus in Strength — in fact, they were actually characterized largely by their aptitude for stealth and sneakiness, with their favored racial classes being the Thief and Assassin. By the time 3rd edition came around, they, along with their full-blooded orc kindred, became strong but stupid brutes with penalties to Intelligence as well as Charisma, limiting their utility as player characters for much else. They also became associated with the new Barbarian class, which basically doubled down on the Dumb Muscle angle. Although 4th and 5th edition dialed back considerably on the "stupid brute" aspect, they're still mostly associated with being Boisterous Bruisers. Likewise, when first introduced in the 1st edition "Vault of the Drow" adventure module, it was only the drow nobility who were demon-worshipping evil monsters.
The common drow had a default alignment of True Neutral; not really buying the demon-worship or enjoying it like their superiors, but reluctantly going along with it in order to avoid ending up on the sacrificial altar themselves. In fact, commoner drow could become allies to the party during that adventure. Although some later works do hint that common drow tend to be more cynical about Lolthism and more likely to be trustworthy than the infamously treacherous noble houses, the drow as a whole are considered Always Chaotic Evil. The gnolls are a race of Monstrous Humanoid Hyenas who went from merely having been taken over by worship of the Demon Prince Yeenoghu after he stole them from their uncaring creator-god to having been directly created by him in 4th edition. At the same time, though, the trope was Zigzagged in that this same edition notes that despite this demonic taint (gnolls being either hyenas that scavenged Yeenoghu's kills or hyenas he forcefed demons to), gnolls were not an Always Chaotic Evil race and that many packs or tribes instead turned away from their demonic heritage to embrace their natural origins and live as hyenas with human-level intellects would do. Such gnolls were hardly fluffy bunny rabbits, but they weren't the cannibalistic slave-taking marauders that Yeenoghu's worshippers were. Then came 5th edition, when their 4e fluff was flanderized: now, all gnolls are essentially nothing more than a particularly weird kind of zombie, being living avatars of Yeenoghu's hunger to devour all life and having essentially nothing in common with natural life at all. Kenku, a race of humanoid ravens, were never a particularly well-developed race in D&D, being defined almost entirely by their obsession with stealing things. But 5th edition still managed to do this to them: in previous editions, kenku were famous for their skill at vocal mimicry, and their use of mimicry to create elaborate code languages.
5e rendered them completely incapable of innovation; they can't even form their own language, and instead mimic other noises because they're mentally incapable of speaking in any other way. St. Cuthbert became this somewhere in 3rd Edition. In his original appearances in Greyhawk, he was characterized as a Lawful Good god who leaned towards Lawful Neutral, with a portfolio consisting of wisdom, common sense, dedication, and truth, who hated evil but prioritized order first and foremost. He was suggested to be one of the most down-to-earth gods, and highly resentful of Knight Templar types like Pholtus — though he was blunt, and cared about converting others, he wasn't into killing people for not worshipping him. Somewhere along the line, it was realized he was the most prominent Lawful Neutral-leaning deity in the setting, and he ended up losing his kinder traits while absorbing a lot from Helm and Pholtus, to the point that "worshipper of St. Cuthbert" became a byline for "stick-in-the-mud Jerkass Church Militant who wants to convert everybody by force." Happened in-universe in Mage: The Ascension. During the Avatar Storm crisis, a detachment of The Technocratic Union became stranded and warped by the Void. The result? Their role in The Technocratic Union consumed everything else, including their humanity. For example, the Progenitors, who wish to advance humanity through biological science, now want to grant perfection to all humans by turning them into a soulless Hive Mind, and Iteration X, who saw cybernetics as a means to an end, is now more machine than man. They are now known as Threat Null, and they want to go back. Happens to a number of recurring characters in Ace Attorney. The Judge seems to get dumber and become more of a Cloudcuckoolander every game, as does Gumshoe (though Dual Destinies dialed the Judge's ditziness back a little).
Additionally, Larry progressively becomes more of a loser, and Wendy Oldbag becomes more of a jerkass unrepentant stalker with every appearance. Winston Payne, a prosecutor who is meek and easily intimidated, became much more arrogant while also becoming a lot more spineless in later games. His brother, Gaspen Payne, takes these traits even further, with an added mix of Jerkass, in the 3D installments. Pearl Fey went from being a child who thinks Maya and Phoenix are a couple and gets mad whenever Phoenix is talking to another woman to being a child who outright slaps Phoenix for even being near another woman in front of Maya. Maya explains that the reason Pearl behaves that way is due to her seeing her hometown's sky-high divorce rate, ergo she wants to make sure Maya and Phoenix never separate or see other people. Pearl would ditch this trait once she got older. Trucy Wright's debut appearance depicts her as an incredibly clever, sharp-witted girl with deep-seated emotional troubles that she conceals beneath the guise of acting like a peppy, quirky teenage magician. In the sequels, she would instead become a quirky, airheaded magician incapable of holding a conversation without twisting its subject back around to talking about magic. The Fruit of Grisaia: Michiru started off fairly dumb and intentionally obnoxious, but had both a serious side and played a rather crucial role in making everyone get along without either killing each other or collapsing from overwork. By The Eden of Grisaia this aspect of her has almost entirely disappeared, leaving her as nothing but the butt of jokes or someone who has no clue what's going on. In the common route, Yumiko is portrayed as an intelligent yet antisocial Bookworm with some Covert Pervert tendencies and a Sugar-and-Ice Personality. Later in the story, her perversion becomes more and more apparent, reaching its pinnacle when she tries to sexually assault Yuuji in his sleep.
Her Tsundere trait goes through the roof, especially in Yumiko's own route, and to this end she goes back to hating Yuuji's guts right after the route starts, even though we saw them having friendly conversations many times before. Her intelligence and level-headedness get undermined by many stupid decisions, repeated mistakes, and even an inability to perform simple housework. The worst moment would probably be when she called her father after her escape and wanted to turn herself in, because she didn't want to cause Yuuji any more problems, even though he had told her multiple times he was helping her of his own free will. In effect, she actually caused more problems: she immediately realized her mistake and hung up, but had already given away their location. The Sakura series by Winged Cloud suffered from two kinds of flanderization. One, it went from having fairly tame fanservice to having graphic sexual content. Starting from Sakura Maid, the series gradually stopped making All Ages versions. Two, the series started out with a male main character falling in love with the female characters, who also have feelings for each other, and moved to having only a female main character falling in love with other female characters. Played for Horror in the "Darkest Timeline" route in Monster Camp. In the first event, the characters are acting normally, even showing off some Hidden Depths; in the second, they've basically regressed to being defined by only one of their personality traits (e.g., Milo suddenly becoming an obsessive Selfie Fiend, Dahlia refusing to talk about anything but her muscles and/or fighting, etc.), in addition to having more of an interest in figuring out the logical solutions to problems.
The third event ultimately reveals that Calculester replaced everyone in camp with robots as a solution to the perceived lack of logic his friends demonstrated; the Flanderization was a result of him trying to program their personalities into the robots. Among Us Logic: Veteran goes from mildly dim and somewhat naive, but still experienced and intelligent, to a total idiot with some utterly bizarre ideas. Captain goes from well-meaning yet narcissistic to basing his entire personality and life around Player and their "friendship". Noob from Battlefield Friends started out as simply making rookie mistakes and only suffering an occasional misunderstanding. However, he quickly began to get dumber and dumber, to the point where, in the second season, he barely understands even core gameplay concepts and is a complete liability to his own team. Lampshaded by the Engineer: "It's like he's getting worse!" In Freeman's Mind, Gordon Freeman started off as somewhat selfish and arrogant, but he still sometimes tried to help people in trouble and sometimes ranted about odd things, though these rants were always on topic. By the beginning of Season 2, Freeman views anyone that isn't himself as below him, goes on utterly deranged rants disconnected from what's going on, and doesn't mind ditching people to fend off the alien invasion on their own. Justified, since the invasion of Black Mesa has caused him to undergo major Sanity Slippage and brought out his worst qualities. The GoAnimate "Grounded" videos have evolved into complete and utter Black Comedy over time thanks to this trope. Originally, the grounding videos were more or less parents grounding their kids and not really liking it at all. However, as time passed, the reasons for the groundings got more and more ridiculous and, thanks to people like Issac Anderson, the creator of "Punishment Days", the videos became more violent, with the characters changing to fit the mood.
Homestar Runner: The title character was Flanderized from The Fool into The Ditz, while the King of Town's unpopularity and Bubs's tendency towards dodgy dealings were also blown out of proportion. Coach Z started as a relatively down-to-earth character with a cartoony but not incomprehensible Midwestern accent, who was occasionally capable of dispensing good advice despite some hints of a troubled private life. In later toons, he's shown to have a lot more than just two "prablems", and his accent evolved into a bizarre tendency to mispronounce any and all words in strange ways, sometimes to the point of bordering on The Unintelligible. Strong Sad actually seemed to go through reverse Flanderization, going from a rather one-note downer to an artsy and snarky Only Sane Man. Several characters in Neurotically Yours have undergone Flanderization: Anchovie, AKA The Pizza Guy/Dude, was a young man who was deeply in love with Germaine, even though she always rejected him. He slowly became a stalker and even a borderline rapist, no matter how many times Germaine rejected him. Germaine's sexual fantasies and desires were exposed more and more, up to the point where Germaine relished it and became a hooker. The reboot arc eliminated the sexual shtick, but later dived right back into it even harder than before. The Hatta' was Foamy's Token Black Friend with a hint of black stereotypes. The Hatta' is now a walking stereotype of black characters and claims everyone is a racist except him. Red vs. Blue is a good example of why Flanderization isn't necessarily a bad thing: in very early episodes, the characters are all quirky, but mostly sane enough to function in real life. Within the space of the first season, they grow increasingly zany, with increasingly hilarious results, and it's doubtful the show would have become so popular otherwise. Donut starts as a somewhat wimpy rookie who is unfortunately assigned pink armor.
He at first despises it and insists it is "lightish red", but later on seems to embrace that armor, becoming full-fledged flamboyantly Ambiguously Gay (or extremely, extremely Camp Straight, depending on your interpretation). He still wears lightish red armor, though. Caboose's childish incompetence and naivety becomes insanity and nearly reality-warping levels of stupidity. Unlike most of the others, his is actually given a reason, in that O'Malley's insanity played a part in his devolving, as did the explosive firefight in his head when Tex and Church drove him out. Sarge: "Sounds like [O'Malley] took some of the furniture when he left. And the carpet. And the drapes. And I wouldn't expect to get that deposit back, if you know what I mean." Simmons changes from occasionally kissing ass to displaying extremely sycophantic behavior towards anyone in a position of power ("You're not only a wonderful leader but also a handsome man, sir!"). Though said characterisation was mostly abandoned as the series went on, and he was instead played off as a Geek with a Surrounded by Idiots attitude towards Grif's laziness and Sarge's insanity. Sarge's dislike of Grif progressed to actually trying to kill him on a fairly regular basis. Like Simmons's, it got toned down in later seasons, though, and his Colonel Kilgore tendencies took center stage instead. Grif himself started as the most competent member of the Reds with occasional references to slacking off (most likely because his work would have been utter nonsense anyway). This evolved into extreme sloth and gluttony. Tucker, who talked about "picking up chicks" in the first few episodes, became a literal font of innuendo and a straight-up Casanova Wannabe as the series went on. Tex went from a skilled and amoral special forces soldier to a legendarily powerful badass, especially after the Blood Gulch Chronicles when the show began using CGI to animate fight scenes.
Church, however, remained roughly as grouchy and cynical throughout, perhaps actually becoming more complex as time passed. Except for his aim. In early episodes, he narrowly (albeit constantly) misses with his Sniper Rifle. By season 6, he manages to unload an entire clip at a guy barely a foot away without hitting him even once. Doc. He started out as a conscientious objector but had no true defining behavioral quirks. Quickly he became a useless wimp (to the point that he reveals he ran track in high school because it was the least competitive sport he could find) and pacifist, serving as a hilarious counter-balance to O'Malley's aggressive ranting. SMG4: SMG4 started as Mario's foil, but got flanderized into being "the meme man" who was primarily concerned with internet fads and such. The "Super Meme Genesis" arc seems to have undone it to a degree by giving him a backstory and a fleshed-out purpose in the world's setting, even if memes are still a big part of it. Mario has always been an idiot in the series, but in the earlier years he was still well-intentioned and tended to come up with insane ideas and expressions rather than being a deliberate nuisance, which is what made him likeable in the first place. Over time, however, he has been flanderized into a screaming, insufferable manchild, throwing childish tantrums whenever he doesn't get his way, and turning into a selfish, annoying jerkass, pestering his friends just to get what he wants. Princess Peach began as the voice of reason before turning into a naggy housewife figure and eventually devolving into just screaming in aggravation at whatever was going on, to the point of barely having dialogue. Fishy Boopkins went from being a socially awkward loser to an anime-obsessed weaboo as of "High School Mario".
Meggy started out as a competitive, hotheaded girl who was more athletic and skilled than most others, but still had a more down-to-earth side and was kind to those around her, especially towards Mario, before eventually falling victim to flanderization. Her competitiveness turned into her being an arrogant, spotlight-hogging jerkass; her skillfulness evolved into her becoming the most flawless character in the series, who's rarely in the wrong even when she should be; and her tolerance of Mario deteriorated into her literally wanting to kill him over the slightest inconveniences. Most of these new traits are a result of her transformation into a human character, and this ended up dividing her among the fanbase even further. In fact, pretty much all of the female characters suffered from this. They all went from more 'down-to-earth' characters who were more mature than the rest, but still made the odd mistake and joined in with the fun. Now all of them are immune to slapstick, always have to carry the entire team, and are always presented as the mature ones of the group, making them stand out like sore thumbs in a comedy-based series like SMG4 (though Tari is less guilty of this, as she is typically the one more involved in the humour). Speaking of Tari, she started out as an introverted gamer girl, who was shy about meeting new people but was at least willing to help her friends out against any threats, like at the end of the Waluigi Arc. Fast forward to today, and now she has become a pacifistic coward, more so than Luigi ever was, crying at the smallest inconvenience and dragging down any fun scene by being scared (which is played straight, unlike with other characters) or wanting to end the violence. In addition, her relationship with ducks first started out as a minor hobby in 'Mario The Ultimate Gamer', but soon turned into an obsession that dominated her 'gamer' personality. Honestly, even the entire tone of the series can fall under this trope.
It started off as an off-the-wall comedy show, starring a parody of Mario surrounded by a cast of wacky characters, who all played a part in the humour. And whilst the odd dark or serious element was added on occasion, it was typically Played for Laughs, so it never did too much to detract from the comedic vibe. Since then, the show has taken a drastic change of tone, including serious moments more often and playing them completely straight to the point where all the fun has to stop to focus on them, making it seem like Mario is now the immature manchild surrounded by more mature adults who have been flanderized into shells of their former selves, and who abuse him for the slightest annoyance rather than joining in on the fun. In Kirbopher's Super Freakin' Parody Rangers series, the Rangers themselves are basically Flanderized versions of the original Mighty Morphin' Power Rangers: Meat, the Red Ranger, is an extreme sports nut who continually flexes his muscles; Willy, the Blue Ranger, is a short and stereotypical nerd; Pinky, the Pink Ranger, is a stereotype of all things girly; and in reference to the original Black and Yellow Rangers, Mace the Black Ranger and Chan the Yellow Ranger are, well, an African-American and a Chinese girl, respectively. Team Service Announcement: In Unlockable Weapons, the BLU Soldier is a competent if overoptimistic fighter who got cornered by a Scout. In Grenade Launcher, he's Too Dumb to Live, willfully standing on a sticky-mine trap. In 8-Bit Theater, this happens to at least four characters.
Black Mage goes from merely sadistic and murderous to a full-on Omnicidal Maniac, Fighter goes from being gullible and dim to being stupider than the furnishings (although given the number of times he's been stabbed in the head this may technically constitute Character Development), Bikke goes from being a bit dim to making Fighter look like a genius, and King Steve goes from incompetent and callous to being a crazed tyrant who acts entirely at random. Achewood's Cassandra "Roast Beef" Kazanzakis is an interesting case. He didn't have a personality to speak of to begin with, but around the party arc we learn that he is depressed and borderline suicidal. Shortly after that the trait began to dominate his personality, though despite this high focus on his depression he remains a rather multifaceted and interesting character. David from Bittersweet Candy Bowl. The author originally intended him to be far less weird and wacky than his later appearances suggest. Pretty much everyone on Bob and George. Mega Man and Bass go from being a bit ditzy and out-there to having full disconnects from reality, Dr. Light goes from being a competent though slightly uncaring scientist to being a vaguely insane drunk, Bob goes from a jerkass to an omnicidal maniac willing to destroy the universe he's in because it's there, and George goes from having a few qualms about the casual use of violence to solve issues to being a full-on Straw Pacifist. Ethan in Ctrl+Alt+Del began as The Ditz, but moved on to Cloud Cuckoolander. More recently, he has surpassed this, and some fans are starting to suspect he is in fact clinically insane. (This may explain why Ethan made a Heroic Sacrifice to prevent a dystopian future from happening, and the entire strip is going to be rebooted.) Szark Sturtz from Dominic Deegan was originally a master swordsman and a sadist.
Following his Heel–Face Turn and admission of a crush on the title character, he eventually became "Szark (who is gay)", according to one forum that follows the comic. Siggy's racism. Quilt's stupidity. Dominic's ability to plan ahead. Luna's bids for independence. Dex's timing for Big Damn Hero moments. In El Goonish Shive, Grace's bubbly innocent side was gradually cranked up while her role in the story was reduced, until she appeared to be little more than a Plucky Comic Relief character. Later arcs revealed she was putting up a bubbly front to cover some serious inner conflicts. Girl Genius has two cases, both in-universe: Punch and Judy are two constructs created by the legendary Heterodyne Boys, and helped them save countless people from mad Sparks. Punch was mute but intelligent and kind, known among his friends for building toys for orphan children, while Judy was severe but compassionate, and eventually retired to become a teacher. Pretty much every story of the Heterodyne Boys reduced them both to the status of Butt-Monkey and Plucky Comic Relief, and they're generally just referred to as "dopey monsters." When Jägermonsters who actually met them talk to the actors who play them, the actors are aghast. One tries to get permission to have the character dictate a treatise on the dignity of man, but he gets shot down by his boss. Klaus Wulfenbach is the feared emperor and dictator of Europe, ruling with astonishing competence and surprising compassion, backed up by an iron fist when people don't cooperate (which is often — most rulers are Mad Scientists, after all, with emphasis on the mad). Yet whenever he appears in the Heterodyne stories, he is usually portrayed as a braggart, a coward, and a Butt-Monkey. The only reason why the storytellers and actors keep their heads is because Klaus secretly finds this portrayal hilarious.
Grim Tales from Down Below has Grim, the personification of death, speaking in a slightly garbled manner in the beginning, but his speech gets more and more difficult to understand as the story moves on. Homestuck: In the Act 6 Intermission 3 walkaround games, some of the Pre-Scratch Trolls are revealed to be Flanderized versions of their "dancestors." In particular, Cronus Ampora reflects all of the worst, most recognizable traits in the fandom's vision of Eridan (i.e., his romantic troubles, douchebaggery, and hipsterism), while Meulin Leijon is an exaggeration of how the fandom sees Nepeta (i.e., shipping and cat puns). Aradia is an interesting case, in that the trait that's been exaggerated is one she didn't actually possess at the beginning, but picked up after coming back to life. She went from a cheerful person with a couple of minor death-fangirl quirks to spending entire scenes bearing a truly demented Slasher Smile and generally being more than a little creepy. While Feferi and Nepeta were rather cute in their early appearances, Act 6 has them do nothing but stand around and do adorable things in the background while smiling. This is taken even further with Fefetasprite, which upped their new status as Those Two Guys by combining them together and (mostly through Roxy) painted the sprite as a "poor, sweet, precious" person who didn't deserve what happened to them. This gets reverted later, as both of them eventually get speaking roles: Feferi in The Homestuck Epilogues (either the Alpha Timeline Feferi or an offshoot; it's never made clear) and Nepeta right after the retcon, through Nepetasprite/Davepetasprite^2. The Wayward Vagabond also gets hit with this post-Cascade, where his Cloudcuckoolander traits (such as his obsession with Can Town) are turned up to eleven, practically turning him into a Manchild.
Least I Could Do , from the same creative team, has seen this happen to most of the characters, but it's particularly noticeable with Rayne, whose childlike obsession with Star Wars and other geek properties and 12-year-old boy-like obsession with getting laid have basically consumed his personality, to the point where it's a surprise when he acts like an actual person, or even gets something accomplished, other than weirding people out with his desire to be Emperor or getting laid. Richard from Looking for Group was always intended to be an Always Chaotic Evil insensitive dick and main comic relief, but his antics in later parts have done nothing but break the pace of the story. And then Subverted when the whole thing turns out to be a coping mechanism over the fact he is forced to be evil. Defied and discussed in Manly Guys Doing Manly Things . Sten gets tired of the cookie jokes real fast. Ménage à 3 is somewhat prone to this: Gary manages both Character Development and flanderization, simultaneously. On his first appearance, he's a quiet geek who is still a virgin due to severe problems in dealing with women. However, he is also able to assert himself a little when pushed, and he sometimes demonstrates flashes of insight or snark. Over time, he loses his virginity, and gains much more confidence in dealing with women (presumably because he's often surrounded by attractive women who sometimes want his sexual services). However, his geekiness becomes worse, if anything — he drags a non-geek date into a comics shop at least once — and his quiet nervousness decays into total spinelessness and passivity, which the comic lampshades on occasion. DiDi's sexual frustration becomes more and more prominent over time — though this may be considered a matter of Character Development, as she is becoming more desperate. Also, her near-total inability to process ordinary social cues maybe gets worse over time.
James rapidly goes from vaguely liberal to Straw Feminist to displaying weird hang-ups for the sake of the plot. Muh Phoenix : Almost everybody. Scarlet Witch really resents that people only remember her for depowering the mutants and being Robosexual. Miko from The Order of the Stick was always a bit excessively-Lawful and stubborn as a mule, but she was in fact capable of reason and being talked down, and while she could jump to conclusions, it wasn't a defining trait of her character. After 80 strips Out of Focus, though, she came back as a completely Lawful Stupid caricature who constantly jumped to conclusions and was completely incapable of understanding that she might, in fact, be wrong about them. In-universe example in Ozy and Millie , where Llewellyn's horns exaggerate character traits when worn. Ozy becomes so passive he goes a whole day without moving, Millie becomes so mischievous she tries to cut off Ozy's tail, and Avery does nothing but gush about how awesome he is after putting them on. Penny Arcade has two main characters, both pretty heavily Flanderized. Tycho went from a slightly meaner than normal, slightly smarter than normal person with some issues to a psychotic genius with an abused past and a fetish for long-necked animals like giraffes. By contrast, Gabe went from being "Not as smart as Tycho" to being portrayed as stupid enough to glue his hands to his face and not quite understand how children are made (as a 30-something man WITH A CHILD at this point). On the other hand, the characters were somewhat bland and interchangeable in the earliest days, so this ended up making them more distinguishable. Inverted in Precocious (sort of). Most of the minor characters are introduced pre-Flanderized. Played straight, however, for Shii Ann Hu. Mentioned by trope name in the alt-text for strip 237 . 
The Perry Bible Fellowship deconstructed this trope in a particularly dark installment, "Gee Golly Jeepers," in which the actor of an obviously flanderized sitcom character — looking very much like the later manifestations of this page's Trope Photo — hates what his alter ego has become so much that he offs himself. It was an expression of Creator Breakdown, as the author had started to fear his own work becoming flanderized. Questionable Content : Hannelore, shortly after her first appearance, mentioned that she had severe OCD. Over time, she developed more and more quirks and phobias to the point of being essentially a female Adrian Monk (she even had a "sex" dream about him, where they cleaned together in the nude). It wasn't long before they had to Hand Wave the fact that she even has piercings, and the circumstances of her first appearance — loitering in a public restroom, smoking and nonchalantly talking to a man peeing in the sink — have become absolutely inconceivable. (In fact she wins a massive bet with her wealthy mother by simply agreeing to briefly touch the toilet in a public restroom!) The problems had to be explained in Comic 1046, where Hannelore reveals she's always had these problems; it just varies by the drugs she takes. Also from QC is Raven. At first she was an easy-to-rile goth stereotype who was not the brightest bulb. Then she reappeared as a Perky Emo. Then she was a little bit of a Genki Girl with rare flashes of wisdom and occasional casual sex. More recently she is a flat out bizarre Cloudcuckoolander (even by the standards of Cloudcuckoolanders), and has probably gone around the block an innumerable number of times. Even Pintsize to an extent. Originally he was just a quirky, sociopathic robot with weird fetishes. Now he is just /b/ personified doing anything for attention. Leo from VG Cats was at first a typical Cloud Cuckoo Lander who, despite some unusual quirks, still made sense at times.
This got worse, much to the discomfort of his co-star, Aeris. Now he is effectively a textbook Ditz. Widdershins sees this invoked by Luxuria, an incarnation of Lust. He picks out six other humans who are inclined towards another of the Seven Deadly Sins and manipulates them to be dominated by this emotion, to make them ideal vessels for his fellow spirits. Whomp! and its titular character, Ronnie. Starting off the series as an introverted, geeky and somewhat overweight otaku, all of these characteristics get exaggerated to comic proportions over time, until he is literally unable to interact with people in a normal manner, his eating is out of control and his Japanophilia borders on manic obsession. A lot of people probably don't realize that the original "Caturday" pictures (now known as LOLCats) were captioned in proper English. They were still funny, because the photos were inherently bizarre, like photos you might see in magazine caption contests. Now it's escalated to the point where any photo of a cat combined with bad enough English is supposedly hilarious. SF Debris delights in taking Star Trek tropes and characters to their extremes. These range from running gags in fandom, such as Janeway's coffee obsession or Troi's bad piloting skills, to total re-imaginings such as: As Trek's Stock Aesops age naturally over time, The Federation is increasingly depicted as an Orwellian state. Probably most famous is Chuck's interpretation of Captain Janeway as a supervillain on par with Palpatine and Dr. Wily. This originated with Janeway's inconsistent characterization over the course of the series, when viewers were expected to blindly side with her despite the writers disagreeing over how she would act from week-to-week. Almost as famous is Harry Kim's sexual confusion and Butt-Monkey role. It's worth mentioning that this and Janeway's mental illness are regarded as Word of Saint Paul by the actors themselves.
Captain Archer as a deranged homeless man who was abducted and put in charge of a starship. His resentment of the Vulcans, a running theme on Enterprise, has been inflated to cartoon-level paranoia. Captain Archer: I told them I told them I TOLD THEM the Vulcans you can't trust the Vulcans they run up the flat to the back of the dragon and hold their tails so you can't fly no more and then you can't know your thoughts no more because they've already stolen the wrench to your mind... Tales of MU does this to gnomes (its version of hobbits) to a certain extent, when comparing the species to the one from Middle-Earth. The latter are respectable to a fault and don't think much of people who travel too much or have adventures — the former literally consider "adventure" a dirty word and take pains to use an Unusual Euphemism. What I mean to say is that she's an... a lady of wandering interests [...] You know, prone to seek out, ah, random encounters. Hazel: Pretty much any sort of Abridged parody will have a few characters be only defined by one character trait from canon. TV Tropes: The Self-Demonstrating Character Pages on this very wiki are prone to this, which is outright acknowledged on the Rick and Morty page. For example, the page for Captain Falcon uses excessive Intentional Engrish for Funny and Bold Inflation due to his Gratuitous English and Large Ham tendencies in the Super Smash Bros. series; in a similar vein, The Incredible Hulk page has the Savage Hulk's every word in Bold Inflation and seems unable to depict him in any mode other than "angry as all hell". Hell, the allcapsed and bolded Billy Mays page even acknowledges this in its tropes. The Angry Video Game Nerd started out as a jaded sort of fellow who would only start dropping F-bombs when the game truly deserved it, and concentrated more on the reviews themselves.
As the series continued, the Nerd would collapse into screaming apoplexy at the slightest provocation, game-related or not, and the shows gradually got more and more taken over by movie-like set pieces and Large Ham supporting characters (this was about when Mike Matei got an expanded role). The Nerd's preference for vintage stuff has also been exaggerated, to the point where he seems to be nearly completely out-of-touch with modern-day appliances with apparently little to no knowledge of newer games, in addition to apparently even using an old Commodore computer and vintage cell-phone on a regular basis. In earlier videos, he was implied to be at least moderately familiar with modern games as he mentioned games that were modern at the time every now and then, and in heavy contrast to his later Disco Dan characterization, was even shown using a current-day (at the time at least) cell-phone in a few episodes (specifically "Who Framed Roger Rabbit?" and "Ghostbusters"). The YouTube user Benthelooney has suffered this in his videos. Originally, on his old PuffyZillaman4 account, he was consistently calm and laidback, despite several rants. However, come the latter half of 2010, he becomes a lot more loud and aggressive in tone. This is possibly to add effect to his rants, however. Another thing that has been flanderized about him is his opinions. For example: In his "Nickelodeon" salute, he didn't exactly say he hated Rugrats ; he just didn't think that it aged well, though it was good for nostalgia purposes. But in a later video, he says he hates it, and that it's not even good for a "nostalgia trip" and was baffled that many people his age love it. The same could be said for his opinions on Rocko's Modern Life and Scooby-Doo . To add to this, he has received another Flanderization when it comes to his opinion towards Pixar. In his older videos, he enjoyed their first couple of movies but wasn't too happy about their later ones and Pixar winning the Oscars.
Now it has gone to the extreme of him actually hating Pixar and thinking that they have ruined animation. He has expressed hate for them in several comments. He even flat-out ranted on Pixar (he apologized for it later). However, in occasional videos, he realizes this and tries to go back to his 2009-2011 persona. Ben Tannehill (the web-show's creator) himself is pretty much the opposite of the character in real life, as he is actually very nice. Brows Held High : In an odd example, Oancitizen tends to be more comically pretentious in crossovers than in his regular reviews. Chubbyemu: While earlier videos often featured people who face serious consequences from genuine mistakes such as petting a cat, or eating leftovers, or a scientist briefly being exposed to methylmercury, recent videos tend to focus more on overtly ridiculous and foolish behavior, such as eating silica gel or drinking glow sticks, which oddly result in no serious consequences. In Courier's Mind: Rise of New Vegas , The Courier accuses Caesar's Legion of flanderizing the Roman Empire, citing that they got their military practices down pat but saying they don't have any right to call themselves true successors to Rome until they demonstrate great works of architecture and poetry on top of that. Filthy Frank started out as an angsty nerd type of character with a mild tendency to say rude and edgy things, but as the series went on his rude, edgy, vulgar, politically-incorrect tendencies gradually became more and more prominent until they eventually consumed his entire character, with his angsty, nerdy personality being pretty much entirely phased out. In addition, he eventually evolved into a full-blown evil, politically-incorrect Jerkass of a Villain Protagonist who frequently said and did immoral and offensive things merely for the sake of being edgy and evil.
Gaming Garbage hosts Rich "Lowtax" Kyanka and Dave "Shmorky" Kelly started out utilizing deadpan, casual deliveries before Lowtax took several levels in ham and Shmorky settled into a hyper-effeminate Uke personality with a frail, high-pitched voice. Hitler Rants tends to reduce Hitler and his high command to what they were doing (or what it might sound like they're doing to people who don't speak German) in the scenes parodied, plus some extra insanity. Hitler... well, rants at things, which can range from Fegelein's latest antic to current events to the failure of his latest stupid plan. He also occasionally takes calls. Fegelein pisses off Hitler. In the original movie, this was because of his desertion. In the videos, it's because he is an 'Antic Master' who keeps pranking Hitler. Gunsche is the Bearer of Bad News, because he was the one to tell Hitler about Fegelein's aforementioned desertion. Jodl is bald and objects to things. Jodl in the movie (and real life) was a Yes-Man, but since he raises an objection during the famous ranting scene, he has now become the go-to objector in the bunker. Burgdorf is an alcoholic, and probably Krebs' boyfriend. Krebs is obsessed with fish and pointing at maps, because in the original rant scene he points at a map when discussing defense plans and says something that sounds a bit like "fish" ("Bewegungsunfähig", meaning immobilized, but said fast and somewhat slurred). Goebbels looks like Skeletor and is Hitler's Yes-Man (which, to be fair, is historically accurate). Goering is fat and likes eating things. Jake from College Humor's Jake and Amir went from being a regular guy having to deal with Amir's antics to being somewhat of a Jerkass, and Amir became less and less sane. JonTron was originally an AVGN-ish review show. Similarly to the Nerd's anger, Jon became more and more wacky and less down-to-earth as videos continued. After he quit Game Grumps, the videos became even more wacky as a result of his time there.
This has worked out well, as his recent videos are some of the most-liked ones. Hugo, one of Matthew Santoro's clones, started out as being mildly stupid, but later became insane. Miranda of Miranda Sings started out as a fairly believable Stealth Parody of amateur singers on YouTube who are deluded about their talent before her singing, fashion sense, and overall attitude slowly started getting more and more over the top. Compare this to this . Colleen Ballinger, the creator of the character, says she was deliberately exaggerating whatever traits were most derided in the comments section in order to make her more annoying. And due to Poe's Law, some still seem to not immediately get that she's a fictional character. A couple of characters from Noob got this: Sparadrap started out as a Noob in pure form: annoying, not listening to people giving him advice, overestimating his talent and the ego that comes with it. Over time, forgetfulness and sheer idiocy became the only reasons he was doing anything wrong. That stupidity led to a tendency to be friendly in situations that called for battle that expanded to other domains, eventually making him into a Stupid Good Manchild. Gaea started out as the Audience Surrogate, then hinted that accumulating in-game currency was a bigger priority to her than other things, which makes sense for a new player. Her means of raking in more money have gotten more and more elaborate over time, to the extent that she's now known to be a greedy Manipulative Bastard by any of her acquaintances that are not a Horrible Judge of Character. SuperMarioLogan : Brooklyn T. Guy, the firefighter from Stuck in a Tree , originally was helpful but dumb, as he got distracted by a phone and failed to save Toad. However, starting from Bowser Junior's Summer School 5, he is a Deadpan Snarker, and keeps working for more jobs as time progresses.
Rosalina, in her first two appearances, kept screaming and crying about Luma, and blames the Sun, which she believes is a planet. However, in her current appearances, she became more of a stereotypical ditzy blonde. In Vaguely Recalling JoJo , Jonathan's signature move is the revolver he uses on Dio when Dio is a vampire. Will A. Zeppeli repeatedly uses the frog punch on his foes. At one point, he does the frog punch on Dio's zombies, and giant fists rain down upon them. The grenade deflection attack is one of Straizo's signature moves.
31
All-in-one quantum key distribution system makes its debut
Sydney | University of New South Wales
4
US and allies announce sanctions against Chinese officials
The US announced sanctions Monday against two Chinese officials for “serious human rights abuses” against Uyghur Muslims, a step coordinated with allies including the European Union, Canada and the United Kingdom, which imposed sanctions on the same individuals and others, the Treasury Department said. The announcement was part of a broader show of unity by the US and its international allies, all voicing condemnation for Beijing’s repression of Uyghur Muslims and other ethnic minorities in Xinjiang province. In a carefully orchestrated series of statements, the US and allies in Europe, North America and the Asia Pacific created a unified show of force, announcing sanctions and issuing condemnations seemingly meant to isolate and pressure Beijing. The EU announced its own sanctions, followed by the US designations and then a joint statement from US Secretary of State Antony Blinken and the foreign ministers of the so-called Five Eyes intelligence alliance made up of the US, the UK, Australia, Canada and New Zealand. “The evidence, including from the Chinese Government’s own documents, satellite imagery, and eyewitness testimony is overwhelming. China’s extensive program of repression includes severe restrictions on religious freedoms, the use of forced labor, mass detention in internment camps, forced sterilizations, and the concerted destruction of Uyghur heritage,” the joint statement said. All five countries had taken action alongside the EU, the statement said. The US designated Wang Junzheng, the Secretary of the Party Committee of the Xinjiang Production and Construction Corps, and Chen Mingguo, Director of the Xinjiang Public Security Bureau. “These individuals are designated pursuant to Executive Order (E.O.) 13818, which builds upon and implements the Global Magnitsky Human Rights Accountability Act and targets perpetrators of serious human rights abuse and corruption,” the Treasury Department said.
“Chinese authorities will continue to face consequences as long as atrocities occur in Xinjiang,” said the Treasury Department’s Director of the Office of Foreign Assets Control Andrea M. Gacki. “Treasury is committed to promoting accountability for the Chinese government’s human rights abuses, including arbitrary detention and torture, against Uyghurs and other ethnic minorities.” Blinken described the Chinese campaign against Uyghurs as genocide. “Amid growing international condemnation, the PRC continues to commit genocide and crimes against humanity in Xinjiang,” Blinken said in a statement, using the acronym for the People’s Republic of China. “The United States reiterates its calls on the PRC to bring an end to the repression of Uyghurs, who are predominantly Muslim, and members of other ethnic and religious minority groups in Xinjiang, including by releasing all those arbitrarily held in internment camps and detention facilities.” The coordinated sanctions announcement comes days after a heated clash between Blinken, National Security Adviser Jake Sullivan and senior Chinese officials prompted by US objections to Beijing’s human rights abuses, its territorial aggression and coercive economic practices. Blinken emphasized last week that the US was also expressing the concerns of allies, and indicated that going forward, Washington would act in concert with them as well, an approach that US officials say is more effective than targeting China one-on-one. 
On Monday, he said that the US had “taken this action today in solidarity with our partners in the United Kingdom, Canada, and the European Union … These actions demonstrate our ongoing commitment to working multilaterally to advance respect for human rights and shining a light on those in the PRC government and CCP responsible for these atrocities.” The Treasury Department said in a statement that, “complementary actions using these global human rights sanctions regimes enable likeminded partners to form a unified front to identify, promote accountability for, and disrupt access to the international financial system by those who abuse human rights.” Also Monday, the US announced a second set of coordinated sanctions with the European Union, sanctioning Myanmar military officials and two military units for the military's violent repression of democratic protests there. And in a dramatic display of international solidarity against repressive Chinese practices, diplomats from more than two dozen countries gathered Monday to try to gain access to a Chinese court as detained Canadian Michael Kovrig went on trial in Beijing on espionage charges. They were denied. Politico was first to report the US was set to unveil sanctions. The European Union announced its sanctions Monday, naming Zhu Hailun, former head of the Xinjiang Uyghur Autonomous Region (XUAR), and three other top officials, for overseeing the detention and indoctrination program targeting Uyghurs and other Muslim ethnic minorities in Xinjiang, they said, according to the Official Journal of the European Union. China responded almost immediately with tit-for-tat penalties, announcing sanctions on Monday against 10 EU politicians and four entities for “maliciously spreading lies and disinformation.” They will be banned from entering mainland China, Hong Kong and Macau, while their related companies and institutions are restricted from doing business with China, it said.
David Sassoli, president of the European Parliament, said Monday that China’s sanctions on MEPs, the Human Rights Subcommittee and EU bodies are “unacceptable and will have consequences.” European Commission Vice President and High Representative for Foreign Affairs Josep Borrell said Monday that China’s retaliatory sanctions against EU officials are “regrettable and unacceptable.” “Rather than change its policies and address our legitimate concerns, China has again turned a blind eye, and these measures are regrettable and unacceptable,” he said during a news conference in Brussels on Monday. Borrell pointed to China’s sanctions against “members of the European Parliament, scholars, and entities with a political and security committee, the Subcommittee on human rights, as well as to national foundations” and said, “This is something that we consider unacceptable, it doesn’t answer our legitimate concerns.” Borrell went on to reiterate that China’s action will not change “the European Union’s determination to defend human rights and to respond to serious violations and abuses,” calling on Beijing to engage in dialogue on human rights issues, instead of continuing to be “confrontational.” “Human rights are inalienable rights,” Sassoli said. The EU said that Zhu Hailun had been described as the “architect” of this Uyghur indoctrination program, and “is therefore responsible for serious human rights violations in China, in particular large-scale arbitrary detentions inflicted upon Uyghurs and people from other Muslim ethnic minorities.” The sanctions marked the first time the EU has targeted China with its human rights sanctions regime, which came into force in December 2020 and was first used over the poisoning of Alexei Navalny.
In a statement posted by the Ministry of Foreign Affairs, China accused the EU of “disregarding and distorting the facts” and “grossly interfering in China’s internal affairs” by imposing sanctions against its officials. The Chinese individuals listed by the EU are now subject to an asset freeze and will be banned from travelling to the EU. The sanctions also bar any EU persons and entities from making funds available, either directly or indirectly, to those listed. The EU said Zhu Hailun was “responsible for maintaining internal security and law enforcement in the XUAR. As such, he held a key political position in charge of overseeing and implementing a large-scale surveillance, detention and indoctrination program targeting Uyghurs and people from other Muslim ethnic minorities.” Zhu is the former secretary of the Political and Legal Affairs Committee of the Xinjiang Uyghur Autonomous Region (XUAR), former Deputy Secretary of the XUAR Party Committee, and former Deputy Head of the regional legislative body, according to the Official Journal of the European Union. Three other Xinjiang officials were sanctioned: Wang; Deputy Secretary of the XUAR Party Committee, Wang Mingshan; and Chen Mingguo, Director of the Xinjiang Public Security Bureau. Apart from the 10 European politicians, China also sanctioned four entities, including the Political and Security Committee of the Council of the European Union, the Subcommittee on Human Rights of the European Parliament, the Mercator Institute for China Studies, and the Alliance of Democracies Foundation. “The Chinese government is firmly determined to safeguard national sovereignty, security and development interests,” the statement added. “The Chinese side urges the EU side to reflect on itself, face squarely the severity of its mistake and redress it. It must stop lecturing others on human rights and interfering in their internal affairs.”
3
Robert D. Putnam on Our Civic Life in Decline
2
Java JDK 17 will remove Applet API
Summary
Deprecate the Applet API for removal. It is essentially irrelevant since all web-browser vendors have either removed support for Java browser plug-ins or announced plans to do so.

History
The Applet API was previously deprecated, though not for removal, by JEP 289 in Java 9.

Description
Deprecate, for removal, these classes and interfaces of the standard Java API:
java.applet.Applet
java.applet.AppletStub
java.applet.AppletContext
java.applet.AudioClip
javax.swing.JApplet
java.beans.AppletInitializer
Deprecate, for removal, any API elements that reference the above classes and interfaces, including methods and fields in:
java.beans.Beans
javax.swing.RepaintManager
javax.naming.Context

Testing
Hundreds of tests need to be either modified or removed before the Applet API is removed, but this JEP is solely about deprecation-for-removal. We will review these tests to determine if further @SuppressWarnings annotations are required.

Risks and Assumptions
In case remaining uses of these APIs do exist, developers can suppress compiler warnings via the @SuppressWarnings("removal") annotation or the -Xlint:-removal command-line option of the javac compiler.
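The suppression mechanism described under "Risks and Assumptions" applies to any element annotated @Deprecated(forRemoval = true), not just the Applet API. A minimal sketch of how it works, using a hypothetical LegacyWidget class as a stand-in for java.applet.Applet so that the example compiles cleanly on any modern JDK:

```java
// RemovalWarningDemo.java
// LegacyWidget is a made-up class for illustration; it stands in for an
// API like java.applet.Applet that is deprecated for removal.
public class RemovalWarningDemo {

    @Deprecated(since = "17", forRemoval = true)
    static class LegacyWidget {
        String render() {
            return "legacy";
        }
    }

    // Without this annotation, javac emits a "removal" warning for each
    // use of LegacyWidget below. Alternatively, all such warnings can be
    // silenced at once with: javac -Xlint:-removal RemovalWarningDemo.java
    @SuppressWarnings("removal")
    public static void main(String[] args) {
        LegacyWidget w = new LegacyWidget();
        System.out.println(w.render()); // prints "legacy"
    }
}
```

Note that, unlike ordinary deprecation, terminal deprecation warnings are emitted even when the deprecated element and its use are in the same compilation unit, which is why the annotation is needed here at all.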
1
A Deep Dive into the NELK/Wizza Giveaway – Roobet House of Cards Pt.4
305
Bridge collapse in Pittsburgh’s Frick Park
At least 10 people were injured Friday when a bridge carrying Forbes Avenue over Fern Hollow Creek in Pittsburgh’s Frick Park collapsed, sending a Port Authority bus and several other vehicles into a ravine below, authorities said. Three of those injured were taken to UPMC Presbyterian by ambulance, authorities said. None of the injuries appeared to be life-threatening. A fourth adult was later treated and released at UPMC Shadyside. They were all in fair condition, according to a statement from the health system. “We were fortunate,” Pittsburgh Mayor Ed Gainey said at the scene of the collapse, noting there were no fatalities. Search and rescue crews and specially trained dogs were brought in as a precaution to search voids within the rubble to make sure no one was trapped underneath, said fire Chief Darryl Jones. “Fortunately, no one was found,” he said late Friday afternoon. The bridge is between South Braddock and South Dallas avenues and is a major artery into Squirrel Hill from Regent Square and Wilkinsburg. It was last inspected in September, Gainey said. A large, articulated Port Authority bus was among the vehicles involved, with the vehicle seen trapped on a section of the collapsed bridge. The bus, Route 61B Braddock-Swissvale, was headed outbound on the bridge and was nearly at its east side when it began to collapse. “As I was driving across, in my mind, I knew the bridge was collapsing,” the Port Authority bus driver Daryl Luciani told Tribune-Review news partner WPXI. “I could just feel it. The bus was bouncing and shaking. It seemed long, but it was probably less than a minute. The bus finally came to a stop.” Jones estimated that rescue crews had to rappel 100 to 150 feet down the ravine to rescue motorists. “They also did like a daisy chain with just hands, grabbing people and pulling them up,” Jones said. “They came down the hill with the flashlights, like I said it was still dark,” he told WPXI.
“They brought a rope over to our door, they tied it to the railing. It was icy. And we were able to get the passengers off and myself.” A ruptured gas line sent the smell of natural gas spewing into the community, a mix of homes and businesses. Crews shut off service to the line, Jones said. The fire chief said authorities evacuated several families who live near the leak, but they were allowed to return by 9 a.m. He said gas service has been restored to all customers affected. Police told drivers to avoid the area. PennDOT closed the offramp from eastbound Interstate 376 to Edgewood (Exit 77). Rescue operations ended at 8:30 a.m., officials said. Howard Seltman, 67, who lives nearby on Briarcliff Road, called 911 after he heard the collapse. Dispatchers told him he was not the first caller. “That bridge carries a lot of traffic every day,” he said from a trail below the bridge in Frick Park. “How does this happen?” He said he heard a loud sound, followed by a rushing sound. The rushing sound got louder and Seltman immediately smelled gas. “I never could have imagined that it would be that bridge,” he said. Wendy Stroh, who lives on South Braddock, said she heard a noise that sounded like “a huge snow plow.” “Just the thought of the bridge collapsing is a very scary prospect,” Stroh said. “I cross that bridge all the time.” Mary Withrow, of Biddle Street, also described hearing a “horrible noise.” “It was so loud you just wanted it to stop,” she said.
She said she has lived in the area her entire life and recalled playing under the bridge as a child. “I just burst into tears, thinking about all the people that travel the bridge every day — my friends, my family, myself,” she said. Greg Barnhisel, of Park Place, said he and his wife live about a quarter-mile from the bridge and walk in the park often. When they saw the news, they went to the scene to see it for themselves. “We were terrified for the people on the bridge and in the bus and hope everyone is OK,” he said. “It’s very concerning and we are saddened by this and how it will affect the city and our beloved Frick Park.” Barnhisel said he’s never had concerns about the bridge’s integrity before. Today a bridge collapsed near Forbes and S Braddock. I am thankful there are no reported fatalities or critical injuries at this time. Thank you @PghPublicSafety for the quick response and thank you to the county, state, and federal governments for the cooperation and assistance. pic.twitter.com/tTld6t62rn — Ed Gainey (@gainey_ed) January 28, 2022 The Catholic Diocese of Pittsburgh said St. Bede Catholic School in Point Breeze has closed because of the bridge collapse. The school had previously been on a two-hour delay because of snow and road conditions. “Many teachers and parents of St. Bede use that bridge every day,” Principal Sister Daniela Bronka said in a statement. “We are so blessed to have been on a two-hour delay. We pray for all involved.” City Controller Michael Lamb said the bridge collapse “is a reminder that investments in infrastructure are investments in public safety.” Allegheny County is home to more structurally deficient bridges than any other county in the country, Lamb said. 
“If we do not act, events like this will, unfortunately, continue to happen,” he said. “I’m thankful there was no loss of life this morning, and we owe a debt of gratitude to the public employees who are supporting recovery efforts at the scene.” The collapse came on the same day President Joe Biden visited Pittsburgh to talk about infrastructure. Biden stopped at the site briefly Friday afternoon.
Classical Statistics Has Outlived Its Usefulness: Here’s the Fix
A PDF of this article may be downloaded here. This article is a precis of Uncertainty. Patient walks into the doctor and says, “Doc, I saw that new ad. The one where the people with nice teeth and loose clothing are cavorting. I want to cavort. Ad said, ‘Ask your doctor about profitol.’ So I’m asking. Is it right for me? Will it clear up my condition?” The doctor answers: “Well, in a clinical trial it was shown that if you repeated that clinical trial an infinite number of times, each time the same but randomly different, and if each time you calculated a mathematical thing called a z-statistic, which assumes profitol doesn’t work, there was a four percent chance that a z-statistic in one of those repetitions would be larger in absolute value than the z-statistic they actually saw. That clear it up for you? Pardon the pun.” Patient: “You sound like you buy into that null hypothesis significance testing stuff. No. What I want to know, and I think it’s a reasonable request, is that if I take this drug am I going to get better? What’s the chance?” Doctor: “I see. Let me try to clarify that for you. In that trial, it was found that a parameter in a probability model related to getting better versus not getting better, a parameter which is like the probability of getting better but is not that probability, had a ninety-five-percent confidence interval of 1.01 to 1.14. So. Shall I write you a prescription?” Patient: “I must not be speaking English. I’m asking only one thing. What’s the chance I get better if I take profitol?” Doctor: “Ah, I see what you mean now. No. I have no idea. We don’t do those kind of numbers. I can give you a sensitivity or a specificity if you want one of those.” Patient: “Just give me the pill. My insurance will cover it.” Ladies and gentlemen: the story you have heard is true. Only the names have been changed to protect the guilty. 
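The doctor's two answers, and the one the patient actually wanted, can all be computed from the same data. Here is a sketch in Python; the trial counts (140 cures in 1,000 on the drug versus 100 in 1,000 on placebo) are invented for illustration, and the test shown is the standard two-proportion z-test, not necessarily the one any particular trial used.

```python
# Contrast the doctor's p-value mouthful with the patient's plain question.
# Trial counts below are hypothetical.
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

drug_cured, drug_n = 140, 1000
placebo_cured, placebo_n = 100, 1000

p1, p0 = drug_cured / drug_n, placebo_cured / placebo_n

# The doctor's answer: a two-proportion z-statistic and its p-value,
# computed under the assumption the drug does nothing (the "null").
pooled = (drug_cured + placebo_cured) / (drug_n + placebo_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / drug_n + 1 / placebo_n))
z = (p1 - p0) / se
p_value = 2 * (1 - normal_cdf(abs(z)))

# The patient's question: given the trial, what's the chance I get better
# if I take the drug?  The plain answer is the observed cure rate.
print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # the doctor's mouthful
print(f"Pr(better | drug, trial) ~ {p1:.2f}")   # what the patient asked for
```

Nothing in the p-value is the probability the patient wanted; the last line is, at least as a rough estimate.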
Ordinary people ask questions like the patient’s: Supposing X is true, what is the chance that Y? Answering is often easy. If X = “This die has six different sides, which when thrown must show only one face up”, the probability of Y = “a five shows” is 1/6. Casinos make their living doing this. The professionals who practice statistics are not like ordinary people. They are puzzled when asked simple probability questions. Statisticians really will substitute those mouthfuls about infinite trials or parameters in place of answering probability questions. Then they will rest, figuring they have accomplished something. That these curious alternate answers aren’t what anybody wants never seems to bother them. Here is why this is so. We have uncertainty about some Y, like the progress of a disease, the topmost side of a die, the spin of a particle, anything. Ideally we should identify the causes of this Y, or of its absence. If we could know the cause or know of its lack, then the uncertainty we have would disappear. We would know. If the doctor could know all the causes of curing the patient, then he and the patient would know with certainty if the proposed treatment would work, or to what extent. Absent knowledge of cause there will be uncertainty, our most common state, and we must rely on probability. If we do not know all the causes of the cure of the disease, the best we can say is that if the patient takes the drug he has a certain chance of getting better. We can quantify that chance if we propose a formal probability model. Accepting this model, we can answer probability questions. We don’t provide these answers, though. What we do instead is speak entirely about the innards of the probability model. The model becomes more important than the reality about which the model speaks. In brief, we first propose a model that “connects” the X and Y probabilistically. 
This model will usually be parameterized; parameters being the mathematical objects that do the connecting. Statistical analysis focuses almost entirely on those parameters. Instead of speaking of the probability of Y given some X, we instead speak of the probability of objects called statistics when the value of one or more of these parameters takes pre-specified values. Or we calculate values of these parameters and act as if these values were the answers we sought. This is not just confusing, it is wrong, or at least wrong-headed. Why these substitutions for simple probability questions happen is answered easily. It is because of the belief that probability exists. Probability, some say, exists in the same way an electric charge exists, or in the way the length of the dollar bill exists. Observations have or are “drawn from” “true” probability distributions. If probability really does exist, then the parameters in those parameterized models also exist, or are measures of real things. This being so, it makes sense to speak of these real objects and to study them, as we might, say, study the chemical reactions that make flagella lash. The opposite view is that probability does not exist, that it is entirely epistemological, a measure of uncertainty. Probability is a (possibly quantified) summary of the uncertainty we entertain about some Y given some evidence X. In that case, it does not make sense to speak of model parameters, except in the formal model-building steps, steps we can leave to the mathematicians. These two beliefs, probability is real or in the mind, have two rough camps of followers. The one that believes probability exists flies the flag of Frequentism. The one that says it doesn’t flies the flag of Bayes. Yet most Bayesians, as they call themselves, are really frequentist sympathizers. When the data hits the code, the courage of their convictions withers and they cross to the other side and become closet frequentists. 
Which is fair enough, because frequentists do the same thing in reverse when discussing uncertainty in parameters. Frequentists are occult Bayesians. The result is a muddle. Let me first try to convince you probability doesn’t exist. Then I’ll explain the two largest generators of over-certainty that come from the belief in probability existing. Finally, I’ll offer the simple solution, a solution which has already been discovered and is in widespread use, but not by statisticians or those who use statistical models. You do not have a probability of being struck by lightning. Nobody does. There is no probability an electron will pass through the top slit of a two-slit experiment. There is no chance for snake eyes at the craps table. There is no probability a random mutation will occur on a string of DNA and turn the progeny of one species into a new species. There is no probability a wave function for some quantum system will collapse into a definite value. There isn’t any chance you have cancer. There isn’t any chance that physical (so-called) constants, such as the speed of light, took the values they do so that the universe could evolve to be observed by creatures like us. If probability existed in an ontological sense, then things would have probabilities. If a thing had probability, like you being struck by lightning, then probability would be an objective property of the thing, like a man’s height or an electron’s charge. In principle, this property could be measured, given specified circumstances, just like height or charge. Probability would have to be more than just a property, though. It either must act as a cause, or it must modify the causes of the thing of interest. It would, for example, have to draw lightning toward you in some circumstances and repel it in others, or it would have to modify the causes that did those things. If probability is a direct cause, it has powers, and powers can be measured, at least in principle. 
If probability only modifies causes, it can either be adjusted in some way, i.e. it is variable, or it is fixed. In either case, it should be easy, at least for simple systems, to identify these properties. If things have probability, what part of you, or you plus atmospheric electricity, or you plus whatever, has the probability of being struck by lightning? The whole of you, or a specific organ? If an organ, then probability would have to be at least partly biological, or it would be able to modify biology. Is it adjustable, this probability, and tunable like a radio so that you can increase or decrease its strength? Does some external cause act on this struck-by-lightning probability so that it vanishes when you walk indoors? Some hitherto hidden force would have to be responsible for this. What are the powers of this cause and by what force or forces does it operate? Is this struck-by-lightning probability stored in a different part of your body than the probabilities of cancer or of being audited by the IRS? Since there are many different things that could happen to you, each with a chance of happening, we must be swarming with probabilities. How is it that nobody has ever seen one? Here is a statement: “There are four winged frogs with magical powers in a room, one of whom is named Bob; one winged frog will walk out the door.” Given this statement, what is the probability that “Bob walks out”? If probability is in things, how is it in non-existent winged frogs? Some say that Germany would have won World War II if Hitler did not invade Russia. What is the probability this is true? If probability exists, then how could probability be in a thing that never happened? I was as I wrote this either drinking a tot of whiskey or I was not. What is the probability I was drinking that whiskey? Where does the probability live in this case, if it is real: in you, in me, in the whiskey? There is an additional problem. 
Since I know what I was doing, the probability for me is extreme, i.e. either 0 or 1, depending on the facts. The probability won’t be either number for you since you can’t be certain. Probability is different for both of us for the same event. And it would seem it should be different for everybody who cared to consider the question. Probability if it exists must be on a continuum of a sort, or perhaps exist as something altogether different. Yet since probability can be extreme, as it is for me in this case and is for you, too, once you learn the facts (I was not drinking), it must be, if probability is real, that the probability just “collapsed” for you. Or does it go out of existence? Well, maybe probability doesn’t exist for any of these things, but it surely must exist for quantum mechanical objects, because, as everybody knows, we calculate the probability of QM events using functions of wave functions (functions of functions!), and everybody believes wave functions are ontologically real. Yet we also calculate probabilities of dice rolls as functions of the physical properties of dice, and probability isn’t in these properties, because if we’re careful we can control outcomes of dice throws. We can know and manipulate all the causes of dice throws. We know we cannot with QM objects. Yet probability in QM isn’t the wave function, it’s a function of the wave function, and also of the circumstances of the measurement that was made (the experiment). The reason we think QM events have probability is that we cannot manipulate the circumstances of the measurement to produce with certainty stated events, like we can with dice (and many things), by carefully controlling the spin and force with which dice are thrown. Again, with dice, we can control the cause of the event, with QM we cannot. The results in QM are always uncertain; the results with dice need not be. Since Bell, we know we cannot know or control all the causes of QM events (the totality of causes). 
This has caused some people to say the cause of QM events doesn’t exist, yet things still happen, and therefore that this non-existent cause is probability. Some will make this sound more physical by calling this strange causal-non-causal probability propensity, but given all the concerns noted above, it is easy to argue propensity is just probability by another name. Whether or not that is true, and even if these brief arguments are not sufficient to convince you probability does not exist, and accepting that philosophers constantly bicker over the details, I am hoping it is clear that if in any situation we did know the cause of an event, then we would not need probability. Or, rather, conditional on this causal knowledge, probability would always be extreme (0 or 1). At the least, probability is related to the amount of ignorance we have about cause. The stronger the knowledge of cause, the closer to extreme the probability is. In any case, it is knowledge of cause which is of the greatest importance. Searching for this knowledge is, after all, the purpose of science. The main alternate view of probability is to suppose it is always a statement of evidence, that it is always epistemological. Probability is about our uncertainty in things, and not about things as they are in themselves. Probability is a branch of epistemology and not ontology. Bruno de Finetti famously shouted this view, and after an English translation of his rebel yell appeared in 1974, there was an explosion of interest in Bayesian statistics, the theory which supposedly adopts this position. (See Bruno de Finetti, 1974. Theory of Probability, (translation by A Machi and AFM Smith of 1970 book), Volume 1, Wiley, New York; quote from p. x.) Everybody quotes this, for good reason (ellipsis original): The abandonment of superstitious beliefs about the existence of the Phlogiston, the Cosmic Ether, Absolute Space and Time,…or Fairies and Witches was an essential step along the road to scientific thinking. 
Probability, too, if regarded as something endowed with some kind of objective existence, is no less a misleading misconception, an illusory attempt to exteriorize or materialize our true probabilistic beliefs. There were others besides de Finetti, like the physicist E.T. Jaynes, economist John Maynard Keynes, and the philosopher David Stove, who all held that probability is purely epistemological. Necessary reading includes Jaynes’s 2003 Probability: The Logic of Science, Cambridge University Press, Jim Franklin’s 2001 “Resurrecting logical probability”, Erkenntnis, Volume 55, Issue 2, pp 277–305, and John Maynard Keynes’s 2004 A Treatise on Probability, Dover Publications. Stove is the dark horse here, and I could only wish his The Rationality of Induction were better known. Probability is an objective classification or quantification of uncertainty in any proposition, conditional only on stated evidence. After de Finetti’s and these others’ works appeared, and for other reasons, many were ready to set aside or give less oxygen to frequentism, the practice of statistics which assumes that probability is real and only knowable “in the limit”. Room was made for something called subjective Bayesianism. Bayesianism is the idea probability is epistemic and that it can be known subjectively. Probability is therefore mind dependent. Yet if probability is wholly subjective, a bad meal may change the probability of a problem, so we have to be careful to define subjectivity. Frequentism, with its belief in the existence of probabilities, is far from dead. It is the form of and practice of statistics taught and used almost everywhere. All Bayesians start as frequentists, which might be why they never let themselves break entirely free from it. Now the philosophical position one adopts about probability has tremendous consequences. 
You will have heard of the replication crisis afflicting fields which rely heavily on statistics, whereby many results once thought to be novel or marvelous are now questioned or are being abandoned. (There are any number of papers on the replication crisis. A typical one is Camerer et al., Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2 (9), pp. 637–644.) Effects which were once thought to be astonishing shrink in size the closer they are examined. The crisis exists in part because of the belief probability is real. Even beside this crisis, there is massive over-certainty generated in how statistics is practiced. All probability can be written in this schema: Pr(Y | X), where Y is the proposition of interest, and X is all the information that is known, assumed, observed, true, or imagined to be true, information that is thought to be probative of Y. Included in X—and here is what is most forgotten in the heat of calculation—are the rules of grammar, definitions of words and symbols, mathematical or in some other language, and all other tacit knowledge about X and Y which is thought too obvious to write down. X must always be present. However useful as shorthand, it is an error to write, sans X, Pr(Y), notation which suggests that probability exists and that Y has a single unique probability that can be discovered. All probability is conditional on some X. If this is doubted, it is an interesting exercise to try to write a probability of some Y without an X. Pick any proposition Y and try to show it has a unique probability without any conditions whatsoever; i.e. that Pr(Y) for your Y exists. You will soon discover this cannot be done. Recall that all tacit premises of grammar and definitions about Y must be included as premises X. Try Y = “This proposition is false.” What is Pr(Y)? This is hours of fun for the kids. What is the probability of Y = “I have cancer”? 
The question is pressing and relevant. It is clear that your knowledge of whether you have cancer is not the same as whether you actually have cancer. Since the question is of enormous interest, we want to give an answer. Information that is relevant to the causes of cancer presents itself: “I’m a man, over fifty; I smoke and am too fond of donuts…” This amorphous list is comprised of what you have heard, true or not, about causes of cancers. You might reason Pr( Cancer | I Smoke ) = good chance. There are no numbers assigned. No numbers can be assigned, either; none deduced, that is. To do that we need to firm up the X to create a mathematical tie between it and Y. The real interest in any probability calculation is therefore in X: which X count for this Y. Ideally, we want to know the cause, the reason, for Y’s truth or its falsity. Once we know a thing’s cause or reason for existence, we are done. Barring this perfect state of knowledge, we’d like to get as close as we can to that perfection. Science is the search for the X factor. The choice of Y is free. This part of probability can be called subjective. Once a Y is decided upon, the search for the X that is Y’s cause, or is in some other way probative, begins. Probability can be called subjective at this step, too, for what counts as evidence can itself be uncertain. Sports are a good example. I think it’s likely the Tigers will win tomorrow, while you say it isn’t. We list and compare all our premises X, some of which we both accept, some we don’t and which are individual to each of us. If I agree with all your evidence, then I must agree with your probability. In this way, probability is not subjective. Probability is not decision, or rather it comes before decision, so that even if we agree in all X, and therefore our probabilities match, we might differ in what decisions we make conditional on this. 
If we agree that X = “This is a 6-sided…etc.” and that Y = “A five spot shows” then it would be absurd if you said the probability was, say, 4/6 while I said 1/6. If you did say 4/6 it must be because you have different, tacit, X than I have. Our premises do not agree. But if they did agree, then probability is no more subjective in its calculation than is algebra once the equation is set. Once the X and Y are fixed, the answers are deduced. One form of subjective probability asks a man to probe his inner feelings, which become his X, to use in calculating the probability of Y. The process is called “probability elicitation”, which makes it sound suitably scientific. And it can be, though it can lead to forced quantification, hence over-certainty. Since the choice of X is free, there is no problem per se with this practice, except that it tends to hide the X, which are the main point of any probability exercise. Subjective probability becomes strange when some require the mind to enter into the process of measurement, as some do with quantum mechanics. (Christopher A. Fuchs is among those trying to marry subjective Bayesian probability with quantum mechanics. See Caves, C.M., C.A. Fuchs, and R. Schack, 2001. Quantum probabilities as Bayesian probabilities, DOI: 10.1103/PhysRevA.65.022305.) That subject is too large for us today. In practice, there is little abuse of subjective probability in ordinary statistical problems. Mostly because there is nothing special about Bayesian probability calculus itself. It is just probability. Bayes is a useful formula for computing probabilities in certain situations, and that’s it. Bayes is supposed to “update” belief, and it can, but the formula is just a mechanism. We start with some X_1, probative about Y. We later learn X_2, and now we want Pr(Y | X_1 X_2). The Bayes formula itself isn’t strictly needed (though no one is arguing for discarding it) to get that. 
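The mechanical role of Bayes's formula can be sketched in a few lines. The two-coin setup below is invented for illustration: Y = "the biased coin was picked", X_1 and X_2 = "heads on toss 1, toss 2". Updating twice as the evidence arrives gives exactly the same Pr(Y | X_1 X_2) as one direct calculation on the combined evidence.

```python
# Bayes's formula as a mere computing device, not magic.
# One of two coins is picked at random: the fair coin lands heads
# with chance 0.5, the biased one with chance 0.8 (invented numbers).

prior = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}

def update(belief, likelihood):
    """One application of Bayes's formula: posterior is proportional
    to prior times likelihood, renormalized to sum to 1."""
    unnorm = {c: belief[c] * likelihood[c] for c in belief}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# "Updating" twice, as X_1 then X_2 arrive...
step1 = update(prior, p_heads)
step2 = update(step1, p_heads)

# ...equals one direct calculation on the union of the evidence:
joint = {c: prior[c] * p_heads[c] ** 2 for c in prior}
direct = joint["biased"] / sum(joint.values())

print(step2["biased"], direct)  # identical: the formula is a shortcut
```

The formula earns its keep as a convenience; the object of interest is still Pr(Y | X), whatever route gets us there.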
We always want Pr(Y | X) whatever the X is, and whenever we get it. If we call X the “updated information”, the union of X_1 and X_2, then it may be that Bayes formula provides a shortcut to the calculation, and again it may not. The real departure of Bayes from frequentism is not in the questions of the subjectivity of probability, for the same subjective choices must be made by frequentists in picking their Y and X. It’s that Bayes insists all uncertain propositions must have conditional probability in the epistemic sense, whereas in frequentism things that were said to be “fixed” do not, and are in fact forbidden by the precepts of the theory to have anything to do with probability. The real cleft is whether or not the uncertainty in parameters of probability models should be quantified with probability. Bayesians say yes, frequentists no. Just what are these parameters? A model or theory is a list of premises X which are probative of Y. If a number and not just a word for the probability is desired, some way to connect the X to the Y such that quantities can be derived must be present. This can be trivial counting as when X = “This machine must take 1 of n states”, Pr(Y = “This machine is in state j” | X) = 1/n. The deduction to 1/n can be done by calling to the symmetry of logical constants. (See David C. Stove, 1986, The Rationality of Induction, Oxford University Press. The second half of the book is a brilliant justification of probability as logic which gives this rare proof. Note this doesn’t have to be a real machine, so we don’t need notions of symmetry; rather, symmetry is deduced from the premises.). There is no looseness in X: it is specific. This is important: we must take the words as they are without addition (except of course their definitions). That is, there is no call to say “Well, some machines break”, which may be true for some machines, but it is not supposed here. 
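The machine example can be computed by plain enumeration on the exact X given and nothing else. A sketch (the function name is ours):

```python
# Pr(Y | X) by counting.  X: "This machine must take 1 of n states";
# Y: "the machine is in state j".  Nothing beyond X is assumed -- no
# "well, some machines break", because X does not say so.
from fractions import Fraction

def pr_state(n, j):
    """Count the states X allows and the states that make Y true."""
    states = range(1, n + 1)
    favorable = sum(1 for s in states if s == j)
    return Fraction(favorable, len(states))

print(pr_state(6, 5))  # the die, viewed as a 6-state machine: 1/6
```

The answer is deduced from the stated premises alone; change the premises and you are answering a different question.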
Probability is always, or should always be, calculated on the exact X specified, and nothing else. More complex probability models use parameterized distributions, a ubiquitous example being the normal, the familiar bell-shaped curve. It is properly said to represent uncertainty in some observable Y. But often people will say Y is normal, as if the observable has the properties of a normal, which is another way of saying probability exists. If the probability exists, the parameters of normal distribution must also exist, and must in some way be part of the observable, or the observable plus measurement, as suggested above. The manner in which this might be true is, of course, never really specified. It’s simple enough to show that this can’t be true, or at least that it can’t be known to be true. Our X in this case might be “The uncertainty in Y is represented by a normal with parameters 10 and 5”. Any Y will do, such as Y = “The measurement is 7.” We can calculate Pr(Y | X), which is equal to 0. And is equal to 0 for any singular or point measurement. Which is to say, given X, the probability of any point measurement is 0. This happens because, as is well known, the normal gives all its probability to the continuum and none to actual measurements. There is no difficulty in the math, but this situation presents a twist to understanding probability. Any measurement we might take is finite and discrete; no instrument can probe reality to an infinite level, nor can we store infinite information. There is therefore no set of actual measurements that can ever conclusively prove an observable is, or is from, a normal distribution. The counter to this is to appeal to the central limit theorem and say collections of measurements of observables are comprised, or are made by, many small causes. In the limit, it is therefore the case the observable is a normal. This proof is circular. 
There is no justification for assigning the normal distribution in the first place to an observable because we don’t know it will always and forevermore be created by these small additive causes. The only time we can know we have a normal is when we are at that happy limit where we know all there is to know about the observable. At which point we will no longer need probability. Besides all that, there is no proof, and every argument against, anything lasting forever. This real-world finiteness is granted, but it is still claimed that y is normal, with the feeling—it is no more than that—that the normal is what in part generates or causes y. This strange view is never really fleshed out. These same objections about finiteness apply to any probability model of measured observables that are not deduced and which are merely applied ad hoc. Most models are in fact ad hoc. Of course, the normal, and many similar models, are often employed in situations where the measurements are known to be finite and discrete. These models can be and are useful, as long as we are happy with speaking of probabilities of intervals and not singular points, and we’re aware at all times the models are approximations. Suppose in fact we are willing to characterize our uncertainty in some observable y with a normal distribution, the familiar bell-shaped curve, with parameters 0 and 1 (the lower case y is shorthand for things like Y = “y > 0”). These parameters specify the center of the bell and give its spread. These suppositions about the model and parameters are our X. It’s then easy to calculate things like Pr(y > 0 | X) = 0.5. It’s also clear that we have perfect certainty in the parameters: they were given to us. It would be an obvious error to say that the uncertainty we have in these parameters, which is none, is the same as the uncertainty we have in the observable, which is something. Yet this mistake is made in practice. To see how, let’s expand our model. 
We have the same observable and also a normal, only this time we don’t know what values the parameters (mu, sigma) take. We want to know Pr( y in s | X) for some set s, where X is the assumption of the normal. Since the parameter mu can take any value on the real line, and sigma any value on the non-negative part of the real line, there is no way to calculate this probability of y in s. Something first has to be said about the parameters. Above we dictated them, which is a form of Bayesianism. They may have even been deduced from other premises, such as symmetry in some applications. That deduction, too, is Bayes. These additional premises fall into X, and the calculation proceeds. Any premises relevant to the parameters can be used. When these premises put probabilities on the parameters the premises are called “priors”; i.e. what we know or assume about the parameters before any other information is included. A common set of prior premises is to suppose that mu ~ N(nu, tau), another normal distribution where the “hyper-parameters” (nu, tau) are assumed known by fiat, and that sigma ~ IG(alpha, beta), an inverse gamma (the form is not important to us), and again where the hyper-parameters (alpha, beta) are known (or specified). A frequentist does not brook any of this, insisting that once the probability model for y is specified, the parameters (mu, sigma) come into existence, or they always existed, only we just now became aware of them (though not of their value). These parameters must exist since probability exists. The parameters have “true” values, and it is a form of mathematical blasphemy to assign probability to their uncertainty. The frequentist is then stuck. If he isn’t handed the true values by some oracle, he cannot say anything about Pr( y in s | X), where for him X is only evidence that he uses the normal. The Bayesian can calculate Pr( y in s | X_b), the subscript denoting the different evidence than that assumed by the frequentist, X_f. 
The values of (nu, tau) and (alpha, beta) are first spoken, which gives the probabilities of the parameters. The uncertainty in these parameters is then integrated out using Bayes’s formula, which produces Pr( y in s | X_b), which in this case has the form of a t-distribution, the parameters of which are functions of the hyper-parameters. The math is fun, but beside the point. The frequentist objects that if the priors were changed, the probability of y in s will (likely) change. This seems a terrible and definitive objection to him. The criticism amounts to this, in symbolic form: Pr( y in s | X) ≠ Pr( y in s | W), where X ≠ W. This objection pushes on an open door. Since probability changes when the probative premises change, of course the probability changes when the priors change. But to the frequentist, probability exists and has true values. These priors might not give the true values, since they are arbitrary. Even granting that objection, the frequentist forgets the normal model in the first place was also arbitrary, a choice between hundreds of other models. It too might not give the true probability. We’d be stuck here, except that the frequentist allows that previous observations of y are able to give some kind of information about the parameters. The Bayesian says so too, but the kind of information for him is different. The frequentist will use previous observations to calculate an “estimate” of the parameters. In the normal model, for the mu it is usually the mean of the previous y; for the sigma it is usually the standard deviation of those y. (So common are these estimates that it has become customary to call the parameters the mean and standard deviation; however this is strictly a mistake.) The word estimate implies a true value exists, because probability exists. The frequentist admits there is uncertainty in the guesses, and constructs a “confidence interval” around each guess. 
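The Bayesian "integrating out" just described can be sketched by brute force: draw the parameters from their priors, draw y given those parameters, and count. The hyper-parameter values below are invented for illustration; the text's closed-form t-distribution answer holds for the conjugate setup, while the Monte Carlo route below needs no such algebra.

```python
# Monte Carlo sketch of Pr(y in s | X_b): integrate out (mu, sigma)
# by sampling them from their priors, per the text:
#   mu ~ N(nu, tau),  sigma ~ IG(alpha, beta).
# Hyper-parameter values are invented for illustration.
import random

random.seed(1)
nu, tau = 0.0, 1.0      # prior on mu
alpha, beta = 3.0, 2.0  # prior on sigma (inverse gamma)

def draw_y():
    """One draw from Pr(y | X_b): parameters from priors, then y."""
    # If G ~ Gamma(shape=alpha, rate=beta), then 1/G ~ IG(alpha, beta);
    # random.gammavariate takes (shape, scale), so scale = 1/beta.
    sigma = 1.0 / random.gammavariate(alpha, 1.0 / beta)
    mu = random.gauss(nu, tau)
    return random.gauss(mu, sigma)

draws = [draw_y() for _ in range(200_000)]
pr_y_in_s = sum(1 for y in draws if 0.0 < y < 1.0) / len(draws)  # s = (0, 1)
print(f"Pr(y in (0,1) | X_b) ~ {pr_y_in_s:.3f}")
```

The point is that the answer lands on the observable y, not on the parameters: the parameters appear only in transit and are averaged away.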
Here is the definition of a 95% confidence interval: if you repeated the experiment, or suite of measurements, or the set of circumstances that gave rise to the observed y, an infinite number of times, each time exactly the same but randomly different, and each time calculating the estimate and the confidence interval for the estimate, then in that infinite set of confidence intervals 95% of them will “cover” the true value of the parameter.

What of this confidence interval? The only thing that can be said is that either the true value of the parameter is in it, or it isn’t. Which is a tautology, and always true, and therefore useless. No frequentist ever in practice uses the official definition of the confidence interval, proving that no frequentist has any real confidence in frequentist theory. Every frequentist instead interprets the confidence interval as a Bayesian would, as giving the chance the true value of the parameter is in this interval.

The Bayesian calculates his interval, called a “credible interval”, in a slightly different way than the frequentist, using the priors and Bayes theorem. In the end, and in many homely problems, the intervals of the frequentist and Bayesian are the same, or close to the same. Even when the intervals are not close, there is a well-known proof that shows that as the number of observations increases, the effect of the prior on the interval vanishes.

So, given these at least rough agreements, what’s the point of mentioning these philosophical quibbles, which excite statisticians but have probably bored the reader? There are two excellent reasons to bend your ear. The first is that, just as frequentists became occult Bayesians in interpreting their results, the Bayesians became cryptic frequentists when interpreting theirs! That the Bayesians also speak of a “true” value of their parameters means they don’t take their theory seriously, either.
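The official definition above can at least be approximated on a machine. This sketch, with made-up “true” parameter values of the kind the frequentist posits, repeats the experiment many times (not infinitely, as the definition demands) and counts how often the interval covers the truth:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "true" parameter values, which the frequentist says exist.
true_mu, true_sigma, n = 5.0, 2.0, 30
z = 1.96  # normal critical value for a 95% interval

trials = 10_000
covered = 0
for _ in range(trials):
    y = rng.normal(true_mu, true_sigma, n)
    half = z * y.std(ddof=1) / np.sqrt(n)
    covered += (y.mean() - half <= true_mu <= y.mean() + half)

coverage = covered / trials
print(coverage)  # close to 0.95 in the long run
# Any *single* interval, though, either contains true_mu or it does not.
```

The long-run count comes out near 95%, but, as the text says, that long-run statement tells you nothing about the one interval actually in hand.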
Even this wouldn’t be a problem, except for a glaring omission that seems to have escaped everybody’s attention. This is the second reason to pay attention. We started by asking for Pr(y in s | X). The frequentist supplied X_f and the Bayesian X_b. Both calculated intervals around estimates of mu. Both then stopped. What Pr(y in s | X_f) or Pr(y in s | X_b) is we never learn. The parameters have become everything. Actually, only one parameter: the second, sigma, is forgotten entirely.

This parameter-centric focus of both frequentists and Bayesians has led to many difficulties. The first is “testing”. The patient we met at the beginning had Y = “I am cured” and X = “I take profitol” and wanted Pr(Y | X). The doctor instead first told him the results of a statistical test. The simplest such test works like this. Cures and failures happen, whether one takes profitol or a placebo. Since we don’t know all the causes of cures or failures, it is uncertain whether taking the drug will cause a cure. The uncertainty in the cause is quantified with a probability model, or in this case two probability models. One has a parameter related to the probability of a cure for the drug, and the second a parameter related to the probability of a cure for the placebo.

These parameters are often called the probabilities of a cure, but they are not; if they were, we would know Pr(Cure | Drug) and Pr(Cure | Placebo), and we’d pick whichever is higher. The test begins with the notion that the probabilities are unknown and must be estimated. But we never want to estimate the probability (except in a numerical-approximation sense): we want Pr(Cure | X), period, where X is everything we are assuming. X includes past observations, the model assumptions, and which pill is being swallowed. The problem here is the overloading of the word probability: in the test it stands for a parameter, and it also stands for the actual conditional probability of a cure.
Confusion arises through this double meaning. In other words, what we should be doing is calculating Pr(Cure | Drug(X)) and Pr(Cure | Placebo(X)). But we do not. Instead we calculate a statistic, which is a function of the estimates of the two parameters. There are many possible, non-unique choices of this statistic, each giving a different answer to the test. One such statistic is the z-statistic. To calculate its probability, it is assumed the two parameters are equal. Not just here in the past observations, but everywhere, for all possible observations. If probability exists, these parameters exist, and if they exist they might be equal. Indeed, they are said to be equal. With these assumptions, the probability of seeing a z-statistic larger in absolute value than the one we actually saw is calculated. This is the p-value.

Footnote: I have a collection of anti-p-value arguments in “Everything Wrong With P-Values Under One Roof”, 2019, in Beyond Traditional Probabilistic Methods in Economics, V. Kreinovich, N.N. Thach, N.D. Trung, D.V. Thanh (eds.), Springer, pp. 22–44. The use of p-values is changing. Even so staid an organization as the American Statistical Association has begun issuing warnings that probability-as-real measures like p-values should not be relied upon. See Wasserstein, R.L. & Nicole A. Lazar, 2016. The ASA’s statement on p-values: context, process, and purpose. The American Statistician, DOI: 10.1080/00031305.2016.1154108. Not to be missed is the 2019 Nature article “Scientists rise up against statistical significance” by Valentin Amrhein, Sander Greenland, and Blake McShane, which relates how over 800 scientists (I am one of them) signed a statement asking for the retirement of the phrase “statistically significant”.
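For the simplest two-proportion version of this test, the whole calculation looks like the following sketch; the trial counts are invented for illustration:

```python
from math import erf, sqrt

# Invented trial counts, for illustration only.
cures_d, n_d = 60, 100   # drug group
cures_p, n_p = 45, 100   # placebo group

p_d, p_p = cures_d / n_d, cures_p / n_p

# The test assumes the two parameters are *equal*, so it pools them:
pool = (cures_d + cures_p) / (n_d + n_p)
se = sqrt(pool * (1 - pool) * (1 / n_d + 1 / n_p))
z = (p_d - p_p) / se

# Two-sided p-value: the probability of a |z| at least this large,
# granting the equality assumption.
phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
p_value = 2 * (1 - phi(abs(z)))
print(round(z, 2), round(p_value, 3))
```

Note what never appears anywhere in the computation: Pr(Cure | Drug) or Pr(Cure | Placebo).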
If the p-value is smaller than the magic number, which everybody knows and which I do not have to repeat, it is decided that the two parameters representing the probability of a cure are different, and by extension, that the probabilities of cures are different. The Bayesian sees one weakness with this: the test puts things backwards. It begins by assuming what we want to know, and uses some odd decision process to confirm or disconfirm the assumption. We do not know the probability the two parameters are unequal, or that one is higher than the other, say. The Bayesian might instead calculate Pr(theta_d > theta_p | X) (the notation being obvious). This doesn’t have to be a strict greater-than, and can be any function of the parameters that fits in with whatever decisions are to be made. For instance, sometimes instead of this probability, something called a Bayes factor is calculated. The idea of expressing uncertainty in the parameters with probability is the same.

The innovation of these parameter posteriors (for that is their name) over testing is two-fold. First, they do not make a one-size-fits-all decision like p-values and declare with finality that parameters are or aren’t different, or, in the bizarre falsification language of testing, that it hasn’t been disproved they are the same. Second, they put a quantification on a potential question of interest; i.e., whether the parameters really are different.

But the Bayesian has stopped short: these posteriors do not answer the question of interest. We wanted the probabilities of cures, not whether some dumb parameters were different. Why not just forget all this testing and focusing on parameters, and calculate these probabilities? Alas, no. Instead of testing, what might be calculated is something called a risk ratio, or perhaps an odds ratio. The true “risk” of a cure here is Pr(Cure | Drug(X)) / Pr(Cure | Placebo(X)).
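A sketch of such a parameter posterior, using the same kind of invented counts and flat Beta(1, 1) priors (one convenient prior assumption among many):

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented trial counts; flat Beta(1, 1) priors assumed throughout.
cures_d, n_d = 60, 100   # drug group
cures_p, n_p = 45, 100   # placebo group

# With a binomial model and Beta prior, the posterior of each
# parameter is again a Beta; sample from both posteriors.
draws = 1_000_000
theta_d = rng.beta(1 + cures_d, 1 + n_d - cures_d, draws)
theta_p = rng.beta(1 + cures_p, 1 + n_p - cures_p, draws)

# Pr(theta_d > theta_p | X): a direct probability statement about
# the parameters, rather than a yes/no test.
p_gt = np.mean(theta_d > theta_p)
print(round(p_gt, 3))
```

This is a genuine probability, not a decision rule; but notice it is still a probability about parameters, not about cures.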
This is a fine one-number summary of the two probabilities, albeit with a small loss of information. The model-based risk ratio is not this, however, and is instead a ratio of the parameters, which again are not the probabilities but which are called probabilities. Since the probabilities are never calculated, and instead estimates of the parameters are, an estimate of the model risk ratio is given, along with its confidence or credible interval. The big but is that this is an interval of the ratio of parameters, which exaggerates certainty. Since we can, if we wanted to, calculate the ratio of the probabilities themselves, it isn’t even needed.

This simple example is multiplied indefinitely, because almost all statistical practice revolves around parameter-centric testing or parameter estimation. Parameters are not themselves the problem, because they are necessary in models. But since they can never be observed, and since they don’t answer probability questions about the observable, they should not be the primary focus. It is parameters or functions of parameters which are reported in almost all analyses; it is the parameters which are fed into decisions, including formal decision analysis; it is even in many cases the parameters which become predictions, and not observables. All this causes massive over-certainty, and even many errors, mainly about ascribing cause to observations.

Here is a simple example of that over-certainty, using regression, that ubiquitous tool. Regression assigns a parameter beta to a supposed or suspected cause, such as sex in a model of income. The parameter in this case will represent the difference between the sexes. The regression will first test whether this parameter is 0.
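To make the contrast concrete, here is a sketch, again with invented counts and flat Beta(1, 1) priors, that computes the ratio of the predictive probabilities themselves alongside the parameter-ratio interval that is usually reported:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented counts, with flat Beta(1, 1) priors.
cures_d, n_d = 60, 100   # drug group
cures_p, n_p = 45, 100   # placebo group

# Predictive probability of a cure for the *next* patient: the
# posterior mean of theta, with parameter uncertainty integrated out.
pr_cure_drug = (1 + cures_d) / (2 + n_d)
pr_cure_placebo = (1 + cures_p) / (2 + n_p)
rr = pr_cure_drug / pr_cure_placebo  # ratio of actual probabilities
print(round(rr, 2))

# The usually reported object is instead an interval for the ratio
# of *parameters*, which is a statement about theta, not about cures.
draws = 1_000_000
theta_d = rng.beta(1 + cures_d, 1 + n_d - cures_d, draws)
theta_p = rng.beta(1 + cures_p, 1 + n_p - cures_p, draws)
lo, hi = np.percentile(theta_d / theta_p, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

The first number answers the patient’s question; the interval answers a question about unobservable parameters.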
If it is decided, via a large p-value, that this parameter is 0, then it will be announced “There is no difference in incomes between males and females.” If the p-value is instead wee, then it will be said “Males and females have different incomes.” The over-certainty here is plain on its face. Next comes the point estimate of the parameter. Suppose income is measured in thousands, and that the estimate of the parameter is 9. It will be announced as definitive that “Males make on average $9,000 more than females.” The confidence interval is, let’s say, (8, 10). It will be announced “There is a 95% chance males make between $8,000 and $10,000 more than females.” Even ignoring the misinterpretation of the confidence interval, this is still wrong. These numbers are about the parameter, and not income.

The predictive probability Pr(M > F | X) will not be equal to Pr(beta > 0 | X). It depends on the problem, but experience shows the latter probability is always much larger than the former. The probability that beta > 0 may be extreme, while the probability of the observables, M > F incomes, may be near 0.5, a number which expresses ignorance about which sex makes more. Again, certainty in the parameters does not translate into certainty in the observables. Worse, the 95% predictive interval in income differences will necessarily be wider than the interval of the parameter. Experience shows that for many real-life data sets, the observable predictive interval is 4-10 times wider than the parametric interval. How much over-certainty really exists in the published literature has not yet been studied, but there is no sense that it is small. In the example we started with, with the normal (0,1) model, the predictive interval is infinitely wider than the parametric interval, which has width 0. This same critique can be applied to any probability model that is cast in its parametric, and not predictive, i.e. probability, form.
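A simulation makes the gap plain. The incomes below are invented, with a true mean gap of 9 (thousand) but a large person-to-person spread; the normal-theory intervals are the familiar ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented incomes in thousands: true mean gap 9, person-to-person
# standard deviation 20, two hundred people per group.
n = 200
male = rng.normal(59.0, 20.0, n)
female = rng.normal(50.0, 20.0, n)

diff = male.mean() - female.mean()
se = np.sqrt(male.var(ddof=1) / n + female.var(ddof=1) / n)

# 95% interval for the *parameter* (the mean gap):
param_int = (diff - 1.96 * se, diff + 1.96 * se)

# 95% *predictive* interval for the gap between one random male's
# and one random female's income:
sd_pred = np.sqrt(male.var(ddof=1) + female.var(ddof=1))
pred_int = (diff - 1.96 * sd_pred, diff + 1.96 * sd_pred)

# Pr(M > F | X): the chance a randomly chosen male out-earns a
# randomly chosen female, estimated by resampling pairs.
pr_m_gt_f = np.mean(rng.choice(male, 100_000) > rng.choice(female, 100_000))

print(param_int, pred_int, round(pr_m_gt_f, 2))
```

The parametric interval is narrow and the probability that beta > 0 is essentially 1, yet the predictive probability of one male out-earning one female lands much nearer to 0.5, and the predictive interval dwarfs the parametric one.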
The reason for the parameter preference is the belief that probability exists. Observations, it is said, are “drawn from” probability distributions, which are a feature of Nature. If we knew the true probability distribution for some observable, then we’d make optimal decisions. Again, if probability exists, parameters exist, and it is a useful shorthand to speak of parameters and save ourselves the difficulty of speaking of probabilities, which would be equivalent in a causal sense. That beta in the regression example is taken as proving that causes exist which make males earn more than females—which might be true, but it is not proven. Any number of things might have caused the income differences. If we understood what was causing each Y, then we would know the true state of nature.

There is in statistical practice a sort of vague notion of cause. In the drug example, if the test is passed, some will say that the drug is better at causing cures than the placebo. Which, of course, might be true. But it cannot be proven using probability. In the set of observations, we are imagining some cures were caused by the placebo; whether this was the placebo itself, or the placebo standing as a proxy for any number of other causes, is unimportant. The drug group, we can assume, saw proportionately more cures. How many of the cures in that group were caused by the placebo? All of them? None? There is no way to know, looking only at the numbers.

If we look outside the numbers to other evidence, we might know whether the drug was a cure. Or sometimes a cure, since it will usually be the case that the drug does not cure all patients. We consider our knowledge of chemistry, biology, and other causal knowledge. If all that tacitly becomes part of X, then we can deduce that some of the cures in the drug group were caused by the drug. But then it becomes a question of why everybody wasn’t cured, if the drug is a cause of cures.
It must be that there are some other conditions, as yet unidentified or not assumed in X, that are different across individuals. In effect, the discussion becomes one of what is blocking the drug’s causal powers. There is a prime distinction, well known, between observations that were part of a controlled environment and those which were merely observed. Some have embraced the notion that cause should be paramount in statistical analysis; a notable example is Judea Pearl in his Causality, 2000, Cambridge University Press. These changes in focus make an excellent start, and if there is any problem it is that the existence of probability is still taken for granted.

Physicists understand control well. In measuring an effect in an experiment, every possible thing that is known or assumed to cause a change in the observable is controlled. Assuming all possible causes have been identified, the cause in this experiment may be deduced. Of course, if this assumption is wrong or ignored, then it is always the case that something exterior to our knowledge was the true cause. If it is right but ignored, then who can disprove that interdimensional Martian string radiation, or whatever, wasn’t the real cause? It is thus always possible something other than the assumed cause was the true cause of any observation. It is also the case that this complete openness to external causes is silly.

We end where we began. If we knew the causes of the observable Y, we do not need probability. If we do not know all the causes of Y, we are uncertain, and thus need probability. Parameter-based testing and parameter estimation are not probability of observables, but strange substitutes which cause over-certainty. The fix is simplicity itself. Instead of testing or estimating, calculate Pr(Y | X). Give the probability of a cure when taking the drug; express the chance that males make more than females with a probability. Every statistical model can be cast into this predictive approach.
In Bayesian statistics this is called calculating the posterior predictive distribution. Some do this, but usually only when the situation seems naturally like a forecasting problem, as in econometrics. It works for every model.

Even if you haven’t been convinced by all the earlier examples of the excellency of this suggestion, think of this. When we are analyzing a set of old observations, we know all about those observations. Testing and estimation are meant to say something about the hidden properties of these observations. If these past observations are the only measurements we will ever take, then we do not need testing or estimation! If we observed that males had a higher mean income than females, then we are 100% certain of this (measurement error can be accounted for if it arises). It is only because we are uncertain of the values of observations not yet made known to us that we bothered with the model in the first place. That demands the predictive, or probability, approach.

Computer scientists have long been on board with this solution. Just ask them about their latest genetic neural net deep big learning artificial intelligence algorithm. (Computer scientists are zealous in the opposite direction of statisticians.) These models are intensely observable-focused. Those scientists who must expose their theories to reality on a regular basis, like meteorologists, are also in on the secret. The reason meteorologists’ probability predictions improve, and why the models of, say, sociologists do not, is because meteorologists test their models against real life on a daily basis. I pick on sociology because its practitioners are heavy users of statistical models. They will release a model after it passes a test, which, if you read the discussion sections of their papers, means to them that the theory they have just proposed is true. Nobody can easily check that theory, though, since it is cast in the arcane statistical language of testing or estimation.
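Checking a predictive claim against reality requires nothing arcane. Here is a sketch using the Brier score, the meteorologist’s standby, on made-up forecasts and outcomes:

```python
# Made-up predictive claims Pr(Y | X) and what actually happened.
forecasts = [0.9, 0.8, 0.9, 0.7, 0.9]   # stated probabilities of Y
outcomes  = [1,   1,   0,   1,   1]     # 1 if Y occurred, else 0

# Brier score: mean squared gap between forecast and outcome.
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)
print(round(brier, 3))  # lower is better; 0 is perfect
```

Anybody with the forecasts and the outcomes can run this; no access to the model’s innards, priors, or data is needed.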
Anybody can check if the weather forecast is right or useful. If instead the sociologist said, “If you do X, the probability of Y is 0.9,” then anybody with the capability of doing X can check for themselves how good or useful the model really is. You don’t need access to the original data, either, nor anything else used in constructing the model. You just need to do X. The transparency of casting models in terms of probabilities, i.e. in their predictive form, may be one reason why this practice hasn’t been adopted. One can be mighty and bold in theorizing, but when one is forced to bet, well, the virtue of humility is suddenly recalled. Incidentally, if you have to ask your doctor whether an advertised pill is right for you, you might want to consider finding a more knowledgeable doctor.
Check out some wonderful Playdate game demos, including a low-fi Doom
Playdate, the tiny yellow handheld game console from software developer and indie game publisher Panic, is still in the works without a firm release date. But Panic had quite a few new updates to share on Wednesday regarding its growing third-party developer program, including some really interesting-looking indie games being made to run on the Game Boy-inspired gadget. One of these titles, from game developer Nic Magnier of Keen Games, is the original Doom, which Magnier inexplicably got running on the monochrome 2.7-inch display. You can even turn the Playdate’s signature physical hand crank, which is an optional control input, to fire the iconic Doom chaingun. In its Twitter thread, Panic revealed that over 250 developers around the world have physical Playdate devices as dev kits they’re working with to create new, custom software for the platform. Not all of it is even strictly game-related. Panic says developer Ryosuke Mihara is building a calligraphy tool that makes use of the hand crank to set the direction of your brush. A fair amount of the other experiences Panic shared in its thread look like really fun, bite-sized takes on classic puzzle and arcade games. We first heard about Playdate in May of last year, and since then, Panic has been relatively quiet. But the company says it’s working diligently to complete the project and ensure it has a robust library at launch. “It’s so early! There’s too much to tweet! And not all of these projects will ship! But it has been amazing to see these devs, experienced or otherwise, get up and running quickly. Remember: every Playdate is a dev kit. You can make your own games when you get yours. If you want,” Panic wrote. “Expect a BIG UPDATE in the next few months with our timeline, pre-order details, a surprise, and title reveals of all the free games you’ll get in Playdate Season One!!”
Md-rec: automatically record and label tracks on Sony MiniDisc
fijam/md-rec
A World Without Work
1. Youngstown, U.S.A. The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977. For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War  II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression. Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry. This winter, I traveled to Ohio to consider what would happen if technology permanently replaced a great deal of human work. I wasn’t seeking a tour of our automated future. I went because Youngstown has become a national metaphor for the decline of labor, a place where the middle class of the 20th century has become a museum exhibit. “Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed,” says John Russo, a professor of labor studies at Youngstown State University. 
“The cultural breakdown matters even more than the economic breakdown.” In the past few years, even as the United States has pulled itself partway out of the jobs hole created by the Great Recession, some economists and technologists have warned that the economy is near a tipping point. When they peer deeply into labor-market data, they see troubling signs, masked for now by a cyclical recovery. And when they look up from their spreadsheets, they see automation high and low—robots in the operating room and behind the fast-food counter. They imagine self-driving cars snaking through the streets and Amazon drones dotting the sky, replacing millions of drivers, warehouse stockers, and retail workers. They observe that the capabilities of machines—already formidable—continue to expand exponentially, while our own remain the same. And they wonder: Is any job truly safe? Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy. But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen. If John Russo is right, then saving work is more important than saving any particular job. Industriousness has served as America’s unofficial religion since its founding. The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away? The U.S. 
labor force has been shaped by millennia of technological progress. Agricultural technology birthed the farming industry, the industrial revolution moved people into factories, and then globalization and automation moved them back out, giving rise to a nation of services. But throughout these reshufflings, the total number of jobs has always increased. What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently. This fear is not new. The hope that machines might free us from toil has always been intertwined with the fear that they will rob us of our agency. In the midst of the Great Depression, the economist John Maynard Keynes forecast that technological progress might allow a 15-hour workweek, and abundant leisure, by 2030. But around the same time, President Herbert Hoover received a letter warning that industrial technology was a “Frankenstein monster” that threatened to upend manufacturing, “devouring our civilization.” (The letter came from the mayor of Palo Alto, of all places.) In 1962, President John F. Kennedy said, “If men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.” But two years later, a committee of scientists and social activists sent an open letter to President Lyndon B. Johnson arguing that “the cybernation revolution” would create “a separate nation of the poor, the unskilled, the jobless,” who would be unable either to find work or to afford life’s necessities. The job market defied doomsayers in those earlier times, and according to the most frequently reported jobs numbers, it has so far done the same in our own time. Unemployment is currently just over 5 percent, and 2014 was this century’s best year for job growth.
One could be forgiven for saying that recent predictions about technological job displacement are merely forming the latest chapter in a long story called The Boys Who Cried Robot—one in which the robot, unlike the wolf, never arrives in the end. The end-of-work argument has often been dismissed as the “Luddite fallacy,” an allusion to the 19th-century British brutes who smashed textile-making machines at the dawn of the industrial revolution, fearing the machines would put hand-weavers out of work. But some of the most sober economists are beginning to worry that the Luddites weren’t wrong, just premature. When former Treasury Secretary Lawrence Summers was an MIT undergraduate in the early 1970s, many economists disdained “the stupid people [who] thought that automation was going to make all the jobs go away,” he said at the National Bureau of Economic Research Summer Institute in July 2013. “Until a few years ago, I didn’t think this was a very complicated subject: the Luddites were wrong, and the believers in technology and technological progress were right. I’m not so completely certain now.” 2. Reasons to Cry Robot What does the “end of work” mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society. After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology. • Labor’s losses. 
One of the first things we might expect to see in a period of technological displacement is the diminishment of human labor as a driver of economic growth. In fact, signs that this is happening have been present for quite some time. The share of U.S. economic output that’s paid out in wages fell steadily in the 1980s, reversed some of its losses in the ’90s, and then continued falling after 2000, accelerating during the Great Recession. It now stands at its lowest level since the government started keeping track in the mid‑20th century. A number of theories have been advanced to explain this phenomenon, including globalization and its accompanying loss of bargaining power for some workers. But Loukas Karabarbounis and Brent Neiman, economists at the University of Chicago, have estimated that almost half of the decline is the result of businesses’ replacing workers with computers and software. In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday. • The spread of nonworking men and underemployed youth. The share of prime-age Americans (25 to 54 years old) who are working has been trending down since 2000. Among men, the decline began even earlier: the share of prime-age men who are neither working nor looking for work has doubled since the late 1970s, and has increased as much throughout the recovery as it did during the Great Recession itself. All in all, about one in six prime-age men today are either unemployed or out of the workforce altogether. This is what the economist Tyler Cowen calls “the key statistic” for understanding the spreading rot in the American workforce. 
Conventional wisdom has long held that under normal economic conditions, men in this age group—at the peak of their abilities and less likely than women to be primary caregivers for children—should almost all be working. Yet fewer and fewer are. Economists cannot say for certain why men are turning away from work, but one explanation is that technological change has helped eliminate the jobs for which many are best suited. Since 2000, the number of manufacturing jobs has fallen by almost 5 million, or about 30 percent. Young people just coming onto the job market are also struggling—and by many measures have been for years. Six years into the recovery, the share of recent college grads who are “underemployed” (in jobs that historically haven’t required a degree) is still higher than it was in 2007—or, for that matter, 2000. And the supply of these “non-college jobs” is shifting away from high-paying occupations, such as electrician, toward low-wage service jobs, such as waiter. More people are pursuing higher education, but the real wages of recent college graduates have fallen by 7.7 percent since 2000. In the biggest picture, the job market appears to be requiring more and more preparation for a lower and lower starting wage. The distorting effect of the Great Recession should make us cautious about overinterpreting these trends, but most began before the recession, and they do not seem to speak encouragingly about the future of work. • The shrewdness of software. One common objection to the idea that technology will permanently displace huge numbers of workers is that new gadgets, like self-checkout kiosks at drugstores, have failed to fully displace their human counterparts, like cashiers. But employers typically take years to embrace new machines at the expense of workers. The robotics revolution began in factories in the 1960s and ’70s, but manufacturing employment kept rising until 1980, and then collapsed during the subsequent recessions. 
Likewise, “the personal computer existed in the ’80s,” says Henry Siu, an economist at the University of British Columbia, “but you don’t see any effect on office and administrative-support jobs until the 1990s, and then suddenly, in the last recession, it’s huge. So today you’ve got checkout screens and the promise of driverless cars, flying drones, and little warehouse robots. We know that these tasks can be done by machines rather than people. But we may not see the effect until the next recession, or the recession after that.” Some observers say our humanity is a moat that machines cannot cross. They believe people’s capacity for compassion, deep understanding, and creativity are inimitable. But as Erik Brynjolfsson and Andrew McAfee have argued in their book The Second Machine Age, computers are so dexterous that predicting their application 10 years from now is almost impossible. Who could have guessed in 2005, two years before the iPhone was released, that smartphones would threaten hotel jobs within the decade, by helping homeowners rent out their apartments and houses to strangers on Airbnb? Or that the company behind the most popular search engine would design a self-driving car that could soon threaten driving, the most common job occupation among American men? In 2013, Oxford University researchers forecast that machines might be able to perform half of all U.S. jobs in the next two decades. The projection was audacious, but in at least a few cases, it probably didn’t go far enough. For example, the authors named psychologist as one of the occupations least likely to be “computerisable.” But some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment. Google and WebMD already may be answering questions once reserved for one’s therapist. This doesn’t prove that psychologists are going the way of the textile worker. 
Rather, it shows how easily computers can encroach on areas previously considered “for humans only.” After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse. For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s. Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study. Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. 
It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”

Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.

To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.

3. Consumption: The Paradox of Leisure

Work is really three things, says Peter Frase, the author of Four Futures, a forthcoming book about how automation will change America: the means by which the economy produces goods, the means by which people earn income, and an activity that lends meaning or purpose to many people’s lives. “We tend to conflate these things,” he told me, “because today we need to pay people to keep the lights on, so to speak.
But in a future of abundance, you wouldn’t, and we ought to think about ways to make it easier and better to not be employed.” Frase belongs to a small group of writers, academics, and economists—they have been called “post-workists”—who welcome, even root for, the end of labor. American society has “an irrational belief in work for work’s sake,” says Benjamin Hunnicutt, another post-workist and a historian at the University of Iowa, even though most jobs aren’t so uplifting. A 2014 Gallup report of worker satisfaction found that as many as 70 percent of Americans don’t feel engaged by their current job. Hunnicutt told me that if a cashier’s work were a video game—grab an item, find the bar code, scan it, slide the item onward, and repeat—critics of video games might call it mindless. But when it’s a job, politicians praise its intrinsic dignity. “Purpose, meaning, identity, fulfillment, creativity, autonomy—all these things that positive psychology has shown us to be necessary for well-being are absent in the average job,” he said. The post-workists are certainly right about some important things. Paid labor does not always map to social good. Raising children and caring for the sick is essential work, and these jobs are compensated poorly or not at all. In a post-work society, Hunnicutt said, people might spend more time caring for their families and neighbors; pride could come from our relationships rather than from our careers. The post-work proponents acknowledge that, even in the best post-work scenarios, pride and jealousy will persevere, because reputation will always be scarce, even in an economy of abundance. But with the right government provisions, they believe, the end of wage labor will allow for a golden age of well-being. Hunnicutt said he thinks colleges could reemerge as cultural centers rather than job-prep institutions. 
The word school, he pointed out, comes from skholē, the Greek word for “leisure.” “We used to teach people to be free,” he said. “Now we teach them to work.” Hunnicutt’s vision rests on certain assumptions about taxation and redistribution that might not be congenial to many Americans today. But even leaving that aside for the moment, this vision is problematic: it doesn’t resemble the world as it is currently experienced by most jobless people. By and large, the jobless don’t spend their downtime socializing with friends or taking up new hobbies. Instead, they watch TV or sleep. Time-use surveys show that jobless prime-age people dedicate some of the time once spent working to cleaning and childcare. But men in particular devote most of their free time to leisure, the lion’s share of which is spent watching television, browsing the Internet, and sleeping. Retired seniors watch about 50 hours of television a week, according to Nielsen. That means they spend a majority of their lives either sleeping or sitting on the sofa looking at a flatscreen. The unemployed theoretically have the most time to socialize, and yet studies have shown that they feel the most social isolation; it is surprisingly hard to replace the camaraderie of the water cooler. Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both,” says Ralph Catalano, a public-health professor at UC Berkeley. Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury. The very things that help many people recover from other emotional traumas—a routine, an absorbing distraction, a daily purpose—are not readily available to the unemployed. 
[Illustration by Adam Levey]

The transition from labor force to leisure force would likely be particularly hard on Americans, the worker bees of the rich world: Between 1950 and 2012, annual hours worked per worker fell significantly throughout Europe—by about 40 percent in Germany and the Netherlands—but by only 10 percent in the United States. Richer, college-educated Americans are working more than they did 30 years ago, particularly when you count time working and answering e-mail at home.

In 1989, the psychologists Mihaly Csikszentmihalyi and Judith LeFevre conducted a famous study of Chicago workers that found people at work often wished they were somewhere else. But in questionnaires, these same workers reported feeling better and less anxious in the office or at the plant than they did elsewhere. The two psychologists called this “the paradox of work”: many people are happier complaining about jobs than they are luxuriating in too much leisure. Other researchers have used the term guilty couch potato to describe people who use media to relax but often feel worthless when they reflect on their unproductive downtime. Contentment speaks in the present tense, but something more—pride—comes only in reflection on past accomplishments.

The post-workists argue that Americans work so hard because their culture has conditioned them to feel guilty when they are not being productive, and that this guilt will fade as work ceases to be the norm. This might prove true, but it’s an untestable hypothesis. When I asked Hunnicutt what sort of modern community most resembles his ideal of a post-work society, he admitted, “I’m not sure that such a place exists.”

Less passive and more nourishing forms of mass leisure could develop. Arguably, they already are developing. The Internet, social media, and gaming offer entertainments that are as easy to slip into as is watching TV, but all are more purposeful and often less isolating.
Video games, despite the derision aimed at them, are vehicles for achievement of a sort. Jeremy Bailenson, a communications professor at Stanford, says that as virtual-reality technology improves, people’s “cyber-existence” will become as rich and social as their “real” life. Games in which users climb “into another person’s skin to embody his or her experiences firsthand” don’t just let people live out vicarious fantasies, he has argued, but also “help you live as somebody else to teach you empathy and pro-social skills.”

But it’s hard to imagine that leisure could ever entirely fill the vacuum of accomplishment left by the demise of labor. Most people do need to achieve things through, yes, work to feel a lasting sense of purpose. To envision a future that offers more than minute-to-minute satisfaction, we have to imagine how millions of people might find meaningful work without formal wages. So, inspired by the predictions of one of America’s most famous labor economists, I took a detour on my way to Youngstown and stopped in Columbus, Ohio.

4. Communal Creativity: The Artisans’ Revenge

Artisans made up the original American middle class. Before industrialization swept through the U.S. economy, many people who didn’t work on farms were silversmiths, blacksmiths, or woodworkers. These artisans were ground up by the machinery of mass production in the 20th century. But Lawrence Katz, a labor economist at Harvard, sees the next wave of automation returning us to an age of craftsmanship and artistry. In particular, he looks forward to the ramifications of 3‑D printing, whereby machines construct complex objects from digital designs.

The factories that arose more than a century ago “could make Model Ts and forks and knives and mugs and glasses in a standardized, cheap way, and that drove the artisans out of business,” Katz told me. “But what if the new tech, like 3-D-printing machines, can do customized things that are almost as cheap?
It’s possible that information technology and robots eliminate traditional jobs and make possible a new artisanal economy … an economy geared around self-expression, where people would do artistic things with their time.” In other words, it would be a future not of consumption but of creativity, as technology returns the tools of the assembly line to individuals, democratizing the means of mass production.

Something like this future is already present in the small but growing number of industrial shops called “makerspaces” that have popped up in the United States and around the world. The Columbus Idea Foundry is the country’s largest such space, a cavernous converted shoe factory stocked with industrial-age machinery. Several hundred members pay a monthly fee to use its arsenal of machines to make gifts and jewelry; weld, finish, and paint; play with plasma cutters and work an angle grinder; or operate a lathe with a machinist.

When I arrived there on a bitterly cold afternoon in February, a chalkboard standing on an easel by the door displayed three arrows, pointing toward bathrooms, pewter casting, and zombies. Near the entrance, three men with black fingertips and grease-stained shirts took turns fixing a 60-year-old metal-turning lathe. Behind them, a resident artist was tutoring an older woman on how to transfer her photographs onto a large canvas, while a couple of guys fed pizza pies into a propane-fired stone oven. Elsewhere, men in protective goggles welded a sign for a local chicken restaurant, while others punched codes into a computer-controlled laser-cutting machine. Beneath the din of drilling and wood-cutting, a Pandora rock station hummed tinnily from a Wi‑Fi-connected Edison phonograph horn. The foundry is not just a gymnasium of tools. It is a social center.

Alex Bandar, who started the foundry after receiving a doctorate in materials science and engineering, has a theory about the rhythms of invention in American history.
Over the past century, he told me, the economy has moved from hardware to software, from atoms to bits, and people have spent more time at work in front of screens. But as computers take over more tasks previously considered the province of humans, the pendulum will swing back from bits to atoms, at least when it comes to how people spend their days. Bandar thinks that a digitally preoccupied society will come to appreciate the pure and distinct pleasure of making things you can touch. “I’ve always wanted to usher in a new era of technology where robots do our bidding,” Bandar said. “If you have better batteries, better robotics, more dexterous manipulation, then it’s not a far stretch to say robots do most of the work. So what do we do? Play? Draw? Actually talk to each other again?” You don’t need any particular fondness for plasma cutters to see the beauty of an economy where tens of millions of people make things they enjoy making—whether physical or digital, in buildings or in online communities—and receive feedback and appreciation for their work. The Internet and the cheap availability of artistic tools have already empowered millions of people to produce culture from their living rooms. People upload more than 400,000 hours of YouTube videos and 350 million new Facebook photos every day. The demise of the formal economy could free many would-be artists, writers, and craftspeople to dedicate their time to creative interests—to live as cultural producers. Such activities offer virtues that many organizational psychologists consider central to satisfaction at work: independence, the chance to develop mastery, and a sense of purpose. After touring the foundry, I sat at a long table with several members, sharing the pizza that had come out of the communal oven. I asked them what they thought of their organization as a model for a future where automation reached further into the formal economy. 
A mixed-media artist named Kate Morgan said that most people she knew at the foundry would quit their jobs and use the foundry to start their own business if they could. Others spoke about the fundamental need to witness the outcome of one’s work, which was satisfied more deeply by craftsmanship than by other jobs they’d held.

Late in the conversation, we were joined by Terry Griner, an engineer who had built miniature steam engines in his garage before Bandar invited him to join the foundry. His fingers were covered in soot, and he told me about the pride he had in his ability to fix things. “I’ve been working since I was 16. I’ve done food service, restaurant work, hospital work, and computer programming. I’ve done a lot of different jobs,” said Griner, who is now a divorced father. “But if we had a society that said, ‘We’ll cover your essentials, you can work in the shop,’ I think that would be utopia. That, to me, would be the best of all possible worlds.”

5. Contingency: “You’re on Your Own”

One mile to the east of downtown Youngstown, in a brick building surrounded by several empty lots, is Royal Oaks, an iconic blue-collar dive. At about 5:30 p.m. on a Wednesday, the place was nearly full. The bar glowed yellow and green from the lights mounted along a wall. Old beer signs, trophies, masks, and mannequins cluttered the back corner of the main room, like party leftovers stuffed in an attic. The scene was mostly middle-aged men, some in groups, talking loudly about baseball and smelling vaguely of pot; some drank alone at the bar, sitting quietly or listening to music on headphones. I spoke with several patrons there who work as musicians, artists, or handymen; many did not hold a steady job.

“It is the end of a particular kind of wage work,” said Hannah Woodroofe, a bartender there who, it turns out, is also a graduate student at the University of Chicago. (She’s writing a dissertation on Youngstown as a harbinger of the future of work.)
A lot of people in the city make ends meet via “post-wage arrangements,” she said, working for tenancy or under the table, or trading services. Places like Royal Oaks are the new union halls: People go there not only to relax but also to find tradespeople for particular jobs, like auto repair. Others go to exchange fresh vegetables, grown in urban gardens they’ve created amid Youngstown’s vacant lots. When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit. John Russo, the Youngstown State professor, who is a co-author of a history of the city, Steeltown USA, says the local identity took a savage blow when residents lost the ability to find reliable employment. “I can’t stress this enough: this isn’t just about economics; it’s psychological,” he told me. Russo sees Youngstown as the leading edge of a larger trend toward the development of what he calls the “precariat”—a working class that swings from task to task in order to make ends meet and suffers a loss of labor rights, bargaining rights, and job security. In Youngstown, many of these workers have by now made their peace with insecurity and poverty by building an identity, and some measure of pride, around contingency. The faith they lost in institutions—the corporations that have abandoned the city, the police who have failed to keep them safe—has not returned. But Russo and Woodroofe both told me they put stock in their own independence. And so a place that once defined itself single-mindedly by the steel its residents made has gradually learned to embrace the valorization of well-rounded resourcefulness. Karen Schubert, a 54-year-old writer with two master’s degrees, accepted a part-time job as a hostess at a café in Youngstown early this year, after spending months searching for full-time work. 
Schubert, who has two grown children and an infant grandson, said she’d loved teaching writing and literature at the local university. But many colleges have replaced full-time professors with part-time adjuncts in order to control costs, and she’d found that with the hours she could get, adjunct teaching didn’t pay a living wage, so she’d stopped. “I think I would feel like a personal failure if I didn’t know that so many Americans have their leg caught in the same trap,” she said. Among Youngstown’s precariat, one can see a third possible future, where millions of people struggle for years to build a sense of purpose in the absence of formal jobs, and where entrepreneurship emerges out of necessity. But while it lacks the comforts of the consumption economy or the cultural richness of Lawrence Katz’s artisanal future, it is more complex than an outright dystopia. “There are young people working part-time in the new economy who feel independent, whose work and personal relationships are contingent, and say they like it like this—to have short hours so they have time to focus on their passions,” Russo said. Schubert’s wages at the café are not enough to live on, and in her spare time, she sells books of her poetry at readings and organizes gatherings of the literary-arts community in Youngstown, where other writers (many of them also underemployed) share their prose. The evaporation of work has deepened the local arts and music scene, several residents told me, because people who are inclined toward the arts have so much time to spend with one another. “We’re a devastatingly poor and hemorrhaging population, but the people who live here are fearless and creative and phenomenal,” Schubert said. Whether or not one has artistic ambitions as Schubert does, it is arguably growing easier to find short-term gigs or spot employment. Paradoxically, technology is the reason. 
A constellation of Internet-enabled companies matches available workers with quick jobs, most prominently including Uber (for drivers), Seamless (for meal deliverers), Homejoy (for house cleaners), and TaskRabbit (for just about anyone else). And online markets like Craigslist and eBay have likewise made it easier for people to take on small independent projects, such as furniture refurbishing. Although the on-demand economy is not yet a major part of the employment picture, the number of “temporary-help services” workers has grown by 50 percent since 2010, according to the Bureau of Labor Statistics. Some of these services, too, could be usurped, eventually, by machines. But on-demand apps also spread the work around by carving up jobs, like driving a taxi, into hundreds of little tasks, like a single drive, which allows more people to compete for smaller pieces of work. These new arrangements are already challenging the legal definitions of employer and employee, and there are many reasons to be ambivalent about them. But if the future involves a declining number of full-time jobs, as in Youngstown, then splitting some of the remaining work up among many part-time workers, instead of a few full-timers, wouldn’t necessarily be a bad development. We shouldn’t be too quick to excoriate companies that let people combine their work, art, and leisure in whatever ways they choose. Today the norm is to think about employment and unemployment as a black-and-white binary, rather than two points at opposite ends of a wide spectrum of working arrangements. As late as the mid-19th century, though, the modern concept of “unemployment” didn’t exist in the United States. Most people lived on farms, and while paid work came and went, home industry—canning, sewing, carpentry—was a constant. Even in the worst economic panics, people typically found productive things to do. 
The despondency and helplessness of unemployment were discovered, to the bafflement and dismay of cultural critics, only after factory work became dominant and cities swelled. The 21st century, if it presents fewer full-time jobs in the sectors that can be automated, could in this respect come to resemble the mid-19th century: an economy marked by episodic work across a range of activities, the loss of any one of which would not make somebody suddenly idle. Many bristle that contingent gigs offer a devil’s bargain—a bit of additional autonomy in exchange for a larger loss of security. But some might thrive in a market where versatility and hustle are rewarded—where there are, as in Youngstown, few jobs to have, yet many things to do.

6. Government: The Visible Hand

In the 1950s, Henry Ford II, the CEO of Ford, and Walter Reuther, the head of the United Auto Workers union, were touring a new engine plant in Cleveland. Ford gestured to a fleet of machines and said, “Walter, how are you going to get these robots to pay union dues?” The union boss famously replied: “Henry, how are you going to get them to buy your cars?”

As Martin Ford (no relation) writes in his new book, The Rise of the Robots, this story might be apocryphal, but its message is instructive. We’re pretty good at noticing the immediate effects of technology’s substituting for workers, such as fewer people on the factory floor. What’s harder is anticipating the second-order effects of this transformation, such as what happens to the consumer economy when you take away the consumers.

Technological progress on the scale we’re imagining would usher in social and cultural changes that are almost impossible to fully envision. Consider just how fundamentally work has shaped America’s geography. Today’s coastal cities are a jumble of office buildings and residential space. Both are expensive and tightly constrained. But the decline of work would make many office buildings unnecessary.
What might that mean for the vibrancy of urban areas? Would office space yield seamlessly to apartments, allowing more people to live more affordably in city centers and leaving the cities themselves just as lively? Or would we see vacant shells and spreading blight? Would big cities make sense at all if their role as highly sophisticated labor ecosystems were diminished? As the 40-hour workweek faded, the idea of a lengthy twice-daily commute would almost certainly strike future generations as an antiquated and baffling waste of time. But would those generations prefer to live on streets full of high-rises, or in smaller towns? Today, many working parents worry that they spend too many hours at the office. As full-time work declined, rearing children could become less overwhelming. And because job opportunities historically have spurred migration in the United States, we might see less of it; the diaspora of extended families could give way to more closely knitted clans. But if men and women lost their purpose and dignity as work went away, those families would nonetheless be troubled. The decline of the labor force would make our politics more contentious. Deciding how to tax profits and distribute income could become the most significant economic-policy debate in American history. In The Wealth of Nations, Adam Smith used the term invisible hand to refer to the order and social benefits that arise, surprisingly, from individuals’ selfish actions. But to preserve the consumer economy and the social fabric, governments might have to embrace what Haruhiko Kuroda, the governor of the Bank of Japan, has called the visible hand of economic intervention. What follows is an early sketch of how it all might work. In the near term, local governments might do well to create more and more-ambitious community centers or other public spaces where residents can meet, learn skills, bond around sports or crafts, and socialize. 
Two of the most common side effects of unemployment are loneliness, on the individual level, and the hollowing-out of community pride. A national policy that directed money toward centers in distressed areas might remedy the maladies of idleness, and form the beginnings of a long-term experiment on how to reengage people in their neighborhoods in the absence of full employment. We could also make it easier for people to start their own, small-scale (and even part-time) businesses. New-business formation has declined in the past few decades in all 50 states. One way to nurture fledgling ideas would be to build out a network of business incubators. Here Youngstown offers an unexpected model: its business incubator has been recognized internationally, and its success has brought new hope to West Federal Street, the city’s main drag. Near the beginning of any broad decline in job availability, the United States might take a lesson from Germany on job-sharing. The German government gives firms incentives to cut all their workers’ hours rather than lay off some of them during hard times. So a company with 50 workers that might otherwise lay off 10 people instead reduces everyone’s hours by 20 percent. Such a policy would help workers at established firms keep their attachment to the labor force despite the declining amount of overall labor. Spreading work in this way has its limits. Some jobs can’t be easily shared, and in any case, sharing jobs wouldn’t stop labor’s pie from shrinking: it would only apportion the slices differently. Eventually, Washington would have to somehow spread wealth, too. One way of doing that would be to more heavily tax the growing share of income going to the owners of capital, and use the money to cut checks to all adults. This idea—called a “universal basic income”—has received bipartisan support in the past. 
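The arithmetic behind the German-style job-sharing described above is simple but worth making concrete. The sketch below is illustrative, not from the article: the function name and the assumption of a 40-hour week are mine; the 50-worker, 20 percent example mirrors the text.

```python
# A minimal sketch of the job-sharing arithmetic: a firm that must shed
# 20 percent of its labor can either lay off 20 percent of its workers
# or cut every worker's hours by 20 percent. Total hours come out the
# same; what differs is how many people stay attached to the workforce.

def labor_cut_options(workers: int, hours_per_week: float, cut_fraction: float):
    """Compare layoffs vs. across-the-board hour reductions for the
    same total reduction in labor."""
    # Option 1: layoffs -- some workers lose their jobs entirely.
    laid_off = round(workers * cut_fraction)
    remaining = workers - laid_off

    # Option 2: job-sharing -- everyone keeps a job at reduced hours.
    shared_hours = hours_per_week * (1 - cut_fraction)

    return {
        "layoffs": {"jobs_lost": laid_off,
                    "total_hours": remaining * hours_per_week},
        "job_sharing": {"jobs_lost": 0,
                        "total_hours": workers * shared_hours},
    }

result = labor_cut_options(workers=50, hours_per_week=40, cut_fraction=0.20)
# Both options supply 1,600 weekly hours, but job-sharing keeps all
# 50 workers employed instead of cutting 10 of them loose.
```

The equivalence is exact only when the cut fraction divides evenly into whole workers; in practice, the policy's appeal is precisely that the "lost" labor is spread thinly rather than concentrated on a few people.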
Many liberals currently support it, and in the 1960s, Richard Nixon and the conservative economist Milton Friedman each proposed a version of the idea. That history notwithstanding, the politics of universal income in a world without universal work would be daunting. The rich could say, with some accuracy, that their hard work was subsidizing the idleness of millions of “takers.” What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work. The most direct solution to the latter problem would be for the government to pay people to do something, rather than nothing. Although this smacks of old European socialism, or Depression-era “makework,” it might do the most to preserve virtues such as responsibility, agency, and industriousness. In the 1930s, the Works Progress Administration did more than rebuild the nation’s infrastructure. It hired 40,000 artists and other cultural workers to produce music and theater, murals and paintings, state and regional travel guides, and surveys of state records. It’s not impossible to imagine something like the WPA—or an effort even more capacious—for a post-work future. What might that look like? Several national projects might justify direct hiring, such as caring for a rising population of elderly people. But if the balance of work continues to shift toward the small-bore and episodic, the simplest way to help everybody stay busy might be government sponsorship of a national online marketplace of work (or, alternatively, a series of local ones, sponsored by local governments). Individuals could browse for large long-term projects, like cleaning up after a natural disaster, or small short-term ones: an hour of tutoring, an evening of entertainment, an art commission. 
The requests could come from local governments or community associations or nonprofit groups; from rich families seeking nannies or tutors; or from other individuals given some number of credits to “spend” on the site each year. To ensure a baseline level of attachment to the workforce, the government could pay adults a flat rate in return for some minimum level of activity on the site, but people could always earn more by taking on more gigs.

Although a digital WPA might strike some people as a strange anachronism, it would be similar to a federalized version of Mechanical Turk, the popular Amazon sister site where individuals and companies post projects of varying complexity, while so-called Turks on the other end browse tasks and collect money for the ones they complete. Mechanical Turk was designed to list tasks that cannot be performed by a computer. (The name is an allusion to an 18th-century Austrian hoax, in which a famous automaton that seemed to play masterful chess concealed a human player who chose the moves and moved the pieces.) A government marketplace might likewise specialize in those tasks that required empathy, humanity, or a personal touch. By connecting millions of people in one central hub, it might even inspire what the technology writer Robin Sloan has called “a Cambrian explosion of mega-scale creative and intellectual pursuits, a generation of Wikipedia-scale projects that can ask their users for even deeper commitments.”

There’s a case to be made for using the tools of government to provide other incentives as well, to help people avoid the typical traps of joblessness and build rich lives and vibrant communities. After all, the members of the Columbus Idea Foundry probably weren’t born with an innate love of lathe operation or laser-cutting.
Mastering these skills requires discipline; discipline requires an education; and an education, for many people, involves the expectation that hours of often frustrating practice will eventually prove rewarding. In a post-work society, the financial rewards of education and training won’t be as obvious. This is a singular challenge of imagining a flourishing post-work society: How will people discover their talents, or the rewards that come from expertise, if they don’t see much incentive to develop either? Modest payments to young people for attending and completing college, skills-training programs, or community-center workshops might eventually be worth considering. This seems radical, but the aim would be conservative—to preserve the status quo of an educated and engaged society. Whatever their career opportunities, young people will still grow up to be citizens, neighbors, and even, episodically, workers. Nudges toward education and training might be particularly beneficial to men, who are more likely to withdraw into their living rooms when they become unemployed.

7. Jobs and Callings

Decades from now, perhaps the 20th century will strike future historians as an aberration, with its religious devotion to overwork in a time of prosperity, its attenuations of family in service to job opportunity, its conflation of income with self-worth. The post-work society I’ve described holds a warped mirror up to today’s economy, but in many ways it reflects the forgotten norms of the mid-19th century—the artisan middle class, the primacy of local communities, and the unfamiliarity with widespread joblessness. The three potential futures of consumption, communal creativity, and contingency are not separate paths branching out from the present. They’re likely to intertwine and even influence one another. Entertainment will surely become more immersive and exert a gravitational pull on people without much to do. But if that’s all that happens, society will have failed. 
The foundry in Columbus shows how the “third places” in people’s lives (communities separate from their homes and offices) could become central to growing up, learning new skills, discovering passions. And with or without such places, many people will need to embrace the resourcefulness learned over time by cities like Youngstown, which, even if they seem like museum exhibits of an old economy, might foretell the future for many more cities in the next 25 years. On my last day in Youngstown, I met with Howard Jesko, a 60-year-old Youngstown State graduate student, at a burger joint along the main street. A few months after Black Friday in 1977, as a senior at Ohio State University, Jesko received a phone call from his father, a specialty-hose manufacturer near Youngstown. “Don’t bother coming back here for a job,” his dad said. “There aren’t going to be any left.” Years later, Jesko returned to Youngstown to work, but he recently quit his job selling products like waterproofing systems to construction companies; his customers had been devastated by the Great Recession and weren’t buying much anymore. Around the same time, a left-knee replacement due to degenerative arthritis resulted in a 10-day hospital stay, which gave him time to think about the future. Jesko decided to go back to school to become a professor. “My true calling,” he told me, “has always been to teach.” One theory of work holds that people tend to see themselves in jobs, careers, or callings. Individuals who say their work is “just a job” emphasize that they are working for money rather than aligning themselves with any higher purpose. Those with pure careerist ambitions are focused not only on income but also on the status that comes with promotions and the growing renown of their peers. But one pursues a calling not only for pay or status, but also for the intrinsic fulfillment of the work itself. 
When I think about the role that work plays in people’s self-esteem—particularly in America—the prospect of a no-work future seems hopeless. There is no universal basic income that can prevent the civic ruin of a country built on a handful of workers permanently subsidizing the idleness of tens of millions of people. But a future of less work still holds a glint of hope, because the necessity of salaried jobs now prevents so many from seeking immersive activities that they enjoy. After my conversation with Jesko, I walked back to my car to drive out of Youngstown. I thought about Jesko’s life as it might have been had Youngstown’s steel mills never given way to a steel museum—had the city continued to provide stable, predictable careers to its residents. If Jesko had taken a job in the steel industry, he might be preparing for retirement today. Instead, that industry collapsed and then, years later, another recession struck. The outcome of this cumulative grief is that Howard Jesko is not retiring at 60. He’s getting his master’s degree to become a teacher. It took the loss of so many jobs to force him to pursue the work he always wanted to do.
1
Dependent Types
2
Exploiting popular macOS apps with a single “.terminal” file
A story about macOS File Quarantine, a 10-year-old bug, privileged OneDrive entitlements, and UX security. By Vladimir Metnew, Jul 27, 2020.

Many popular macOS apps with file-sharing functionality didn’t delegate file quarantine to the OS, leading to a File Quarantine bypass (the macOS analogue of Windows’ Mark-of-the-Web) for downloaded files. The vulnerability has low/moderate impact on its own, but it can be combined with other custom behaviours and UX features to increase the severity. Many popular products like Keybase, Slack, Skype, Signal, and Telegram decided to fix the issue, but the vulnerability remains unfixed in file-syncing apps: Dropbox, OneDrive, Google Drive, etc. Overall, this vulnerability affected at least 20+ apps, a significant part of the macOS ecosystem. Apps from the App Store aren’t vulnerable to File Quarantine issues, with the exception of OneDrive, which we’ll discuss later in this post. During the research, I also discovered two “insecure features” in macOS: dangerous handling of .fileloc and .url shortcut files, which allow executing arbitrary local files by full path when the shortcut file is opened. This behaviour allowed me to discover two Chrome and Firefox bugs: CVE-2020-6797 and CVE-2020-6402. See the slides for my talk at the “Objective by the Sea v3” conference about File Quarantine issues.

General Attack Scenario

A macOS user downloads and opens a hello.terminal file received through the application. The user expects the downloaded file to be quarantined and tracked by Gatekeeper. The application neither delegates the file quarantine process to the OS, nor quarantines files on its own. Boom! Malicious code executes on the device.

UX Security

You may notice that this scenario relies on user interaction and requires convincing the user to open the file via phishing. Unfortunately, the state of UX security in many chat apps is pretty bad, and it significantly increases risks for users. Some apps allow executing the malicious file with a single click on the file icon. 
For example, previous versions of Telegram downloaded files automatically, without user consent. It’s funny that no one takes iMessage as a good example of secure file handling: iMessage shows a QuickLook preview of the file instead of opening it. Another problem is that some product developers refuse to treat secure file handling as a developer responsibility and shift the security risks to users. (Such was the response from WhatsApp, for example.)

.terminal > GateKeeper

The .terminal file is not widely known. It’s a Terminal.app configuration profile, and it doesn’t require the +x file permission to execute a malicious payload on the device.

How to fix it?

Implementing a fix for this issue might be tricky, because there are two possible options: Quarantine the downloaded file with the xattr utility — set the com.apple.quarantine attribute to a specific value (setting an arbitrary value also works); for example, Signal for macOS implements this option. Or delegate quarantine to the OS in Info.plist: the OS will quarantine all files downloaded by the application. This approach is tricky, because the OS may also quarantine the new version of the app downloaded by the auto-update mechanism; for example, the Brave browser had this issue in the past.

Chat apps are safe now

Overall, I sent around ~15 reports about file quarantine to different chat apps, messengers, and mail apps.

Attacking file-syncing apps

Most file-syncing apps sync files without the quarantine attribute. In the case of file-syncing apps, the idea behind the attack scenario is to [1] sync the file into the user’s cloud file storage and later [2] “somehow” launch the synced file on the user’s device. In my opinion, the most interesting approach to “syncing” is to leverage App Folder integrations, which users can authorize via OAuth. Imagine an attacker somehow hacks a file converter and steals OAuth tokens. 
At this point, the attacker can sync anything to all machines owned by users who authorized the hacked OAuth app to write into their cloud folders. And these malicious files won’t have a quarantine attribute once they’re synced to the device. The attacker still needs an inconspicuous way to execute the synced file on the device while attracting minimal user attention. There is an option for that — a .url shortcut file that can launch any file on the device by its full path.

.fileloc, aka a bug that wasn’t patched in 10 years

When Apple fixed .url file handling in Catalina, I started searching for a bypass and eventually found mentions of the .fileloc file. The dangerous nature of .fileloc has been known since 2009: “Incomplete blacklist vulnerability in Launch Services in Apple Mac OS X 10.5.8 allows user-assisted remote attackers to execute arbitrary code via a .fileloc file, which does not trigger a ‘potentially unsafe’ warning message in the Quarantine feature.” This 10-year-old bug still works on Catalina. That means Apple doesn’t consider this feature to be a bug in macOS. Apple knows that it’s possible to execute files on the device with .fileloc. Apple also knows that all default apps have quarantine enabled. Launching a quarantined file with .fileloc doesn’t carry security risks, because the user will be asked to confirm the file launch. That means .fileloc is not a vulnerability by itself unless there are files without a quarantine attribute. Here is a sample .fileloc file.

.fileloc in browsers

Chrome and Firefox didn’t have .fileloc in their Safe Browsing blocklists. Thus, a browser extension with the downloads permission could execute arbitrary files on the device. CVE-2020-6797, CVE-2020-6402. PoC screencast: drive.google.com

Dropbox

I think Dropbox was aware of this issue for a long time, but they decided to leave it unpatched, because the patch impacts UX (leading, consequently, to fewer users and less profit). 
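For reference, a .fileloc is just an XML property list with a URL key pointing at an absolute file path. A minimal sketch of the kind of sample file mentioned above, assuming the standard plist shape (the Calculator path mirrors the PoC target used elsewhere in the post):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>URL</key>
	<string>file:///Applications/Calculator.app/Contents/MacOS/Calculator</string>
</dict>
</plist>
```

Saved with a .fileloc extension and opened via Launch Services, a file like this launches the target by its full path, which is exactly why it only becomes dangerous when unquarantined files are reachable.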
OneDrive

OneDrive is distributed on the App Store, but it doesn’t apply quarantine to the synced files. Why? As mjtsai.com explains: “Normally, when a sandboxed app writes to a file, the file is quarantined. … However, when an app such as TextEdit with the ‘com.apple.security.files.user-selected.executable’ entitlement saves a file, it removes the quarantine extended attribute!” OneDrive removes the quarantine meta-attribute because Apple granted it the com.apple.security.files.user-selected.executable entitlement. Here is MSRC’s response about file quarantine in OneDrive: “The ask is for a mark-of-the-web (MOTW) to be attached to files that OneDrive syncs. We do not do this on either Windows or the Mac. On the Mac, attaching a quarantine/MOTW bit has a bunch of user experience impacts that break features like Files on Demand and in general are inappropriate for a sync client. At one point, the Mac store app had the quarantine bit set on files automatically by the system because it was sandboxed. This broke the aforementioned items. We asked for (and received) an exception from Apple’s head of macOS security to set an entitlement that does not cause the quarantine bit to be set. Apple’s position is generally that sync apps do not need to have MOTW/quarantine set on synced content.” According to MSRC: Quarantine dialogs break UX. Apple “legitimately” granted OneDrive those entitlements. Apple’s head of macOS security made an exception for OneDrive 😯. In my opinion, this case is pretty similar to the “Hey vs. Apple” case, where big companies can obtain specific permissions while ignoring general platform rules. “Apple’s position is generally that sync apps do not need to have MOTW”. At the same time, Apple’s iCloud applies the quarantine meta-attribute to synced files 🙃:

Microsoft: Hey Apple, we lose $PROFIT per year due to Quarantine dialogs. Do you have a hack for this?
Apple: Sure, apply these entitlements.
Microsoft: Cool. 
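The quarantine state the post keeps referring to is just an extended attribute, so it is easy to check for yourself. A minimal sketch, not from the original post: the helper names (`is_quarantined`, `apply_quarantine`, `audit_unquarantined`) are hypothetical, and the quarantine value format shown is an assumption about the usual "flags;timestamp;agent;UUID" layout (the post notes an arbitrary value also works). `apply_quarantine` mirrors the xattr-based fix described earlier (the approach Signal took); the audit helper lists synced files that arrive without the attribute.

```python
import os

# macOS quarantine extended attribute (the "MOTW analogue" discussed in the post).
QUARANTINE_ATTR = "com.apple.quarantine"


def is_quarantined(path: str) -> bool:
    """Return True if the file already carries the quarantine xattr."""
    try:
        return QUARANTINE_ATTR in os.listxattr(path)
    except OSError:
        return False  # filesystem without xattr support


def apply_quarantine(path: str, agent: str = "MyApp") -> None:
    """Set a minimal quarantine value (macOS only), mirroring the xattr fix.

    Assumed value layout: "flags;hex-timestamp;agent;UUID"; the post says
    even an arbitrary value is enough for Gatekeeper to engage.
    """
    value = "0081;00000000;%s;" % agent
    os.setxattr(path, QUARANTINE_ATTR, value.encode())


def audit_unquarantined(folder: str) -> list[str]:
    """List files under `folder` that lack the quarantine attribute."""
    hits = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            full = os.path.join(root, name)
            if not is_quarantined(full):
                hits.append(full)
    return sorted(hits)
```

Running the audit over a Dropbox, OneDrive, or Google Drive folder would show the behaviour described above: synced files come back unquarantined. The alternative fix mentioned earlier, delegating to the OS via Info.plist, uses the LSFileQuarantineEnabled Launch Services key instead of code like this.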
Conclusions: Both TCC and Quarantine impact macOS UX heavily. Product companies don’t want to implement OS security features because doing so impacts their growth and profits. Apple is aware of .fileloc and implements secure file handling in its default apps; at the same time, Apple grants entitlements to App Store apps (OneDrive) to bypass quarantine. That makes .fileloc a security threat again. Also, Microsoft historically has issues with secure file handling in its products. It’s easy to prove by checking Chrome’s Safe Browsing rules: you’ll notice that most “DANGEROUS” executable file types belong to Windows. Microsoft’s response to my report about quarantine issues in Skype was essentially: “ok, nice, bye. We will fix it.” No HoF, no bounty, no thanks.

Response from bug hunters

A few H1 reports about file quarantine were filed by bug bounty hunters. When my reports to Brave Browser and Keybase were disclosed, I noticed that bug hunters started reporting file quarantine issues to other macOS apps. Previously, I hadn’t seen any mention of this technique on bug bounty programs. I want to say “Mahalo!” to these hunters for making macOS apps more secure. This also proves the vulnerability is easy to understand and reproduce. I also received thanks from bug hunters for making this technique public. I’m happy that my work helps other people!

Big Sur = the end of the .terminal epoch

Assuming there are no similar files like .terminal, asking for user confirmation upon file opening is a good way to prevent further abuses of .terminal. Nice fix, Apple. See the full list of files recognized by macOS in the slides.

Open Questions

Who should be responsible for secure file handling: developers, platforms, or users? What’s the best way to engage companies to implement security features? Is it acceptable to make “exceptions” for some companies? Is this a unique case in the Apple ecosystem? Is there a way to fix all the problems with file quarantine? Do we need to solve them? 
Try to search for misconfigurations when researching macOS/iOS apps. (SMJobBless is also good evidence of this theory.) You can find more PoC screencasts in the slides.

Slack

Report on HackerOne.

Keybase

Report on HackerOne. The Keybase team is awesome; they quickly responded to the request and awarded a good bounty. Quarantine was added to both the macOS and Windows Keybase apps and, most importantly, to the Keybase FileSystem (KBFS). I want to thank the Keybase team for effective collaboration!

Telegram

Another Telegram bug: Telegram for macOS allowed users to send links pointing to the file:// origin. By combining this behaviour with insecure link opening (NSWorkspace.openURL), an attacker could send a crafted link, the opening of which would lead to code execution on the device. Yes, it’s the same bug as in Zoom. The PoC was: file://host/Applications/Calculator.app/Contents/MacOS/Calculator

References

https://lapcatsoftware.com/articles/notarization.html
https://eclecticlight.co/2019/04/26/%F0%9F%8E%97-quarantine-documents/
https://eclecticlight.co/2019/05/10/finder-security-errors-opening-documents-a-summary/
https://blog.malwarebytes.com/cybercrime/2015/10/bypassing-apples-gatekeeper/

Thanks

Mahalo to “Objective by the Sea” for inviting me to this awesome conference! Thanks to the authors of public resources about macOS security for sharing knowledge with others. Thanks to the “hackers” who were open to discussing the above-mentioned vulnerabilities, behaviours, and design decisions.

P.S. As you can see, File Quarantine is a huge subject for discussion that involves many different parties (security folks, end users, product managers, platforms). File sharing is basic functionality that significantly impacts end users. Obviously, any change to this functionality should be carefully examined by all involved parties. I treat this post as a summary of the research I had been doing (asynchronously) for a long time. Right now I feel like I see the logical end of this problem. 
I’m not trying to convince anyone that Quarantine should be enabled, disabled, or changed. DMs are open — https://twitter.com/vladimir_metnew https://www.linkedin.com/in/metnew/
1
Fired Google Insider Speaks Out [video]
2
Human cells grown in monkey embryos raise ethical concerns
A human-monkey blastocyst, an early stage of embryo development (Image: Weizhi Ji, Kunming University of Science and Technology). Researchers have grown human cells in monkey embryos with the aim of understanding more about how cells develop and communicate with each other. Juan Carlos Izpisua Belmonte at the Salk Institute in California and his colleagues have produced what are known as human-monkey chimeras, with human stem cells – special cells that have the ability to develop into many different cell types – inserted in macaque embryos in petri dishes in the lab. However, some ethicists have raised concerns, saying this type of work “poses significant ethical and legal challenges”. Chimeras are organisms whose cells come from two or more individuals. In humans, chimerism can naturally occur following organ transplants, where cells from that organ start growing in other parts of the body. Izpisua Belmonte says the team’s work could pave the way in addressing the severe shortage in transplantable organs, as well as help us understand more about early human development, disease progression and ageing. “These chimeric approaches could be really very useful for advancing biomedical research not just at the very earliest stage of life, but also the latest stage of life,” he said. In 2017, Izpisua Belmonte and his colleagues created the first human-pig chimera, where they incorporated human cells into early-stage pig tissue but found that human cells in this environment had poor molecular communication. So the team decided to investigate lab-grown chimeras using a more closely related species: macaques. The human-monkey chimeric embryos were monitored in the lab for 19 days before being destroyed. The team says the human stem cells “survived and integrated with better relative efficiency than in the previous experiments in pig tissue”. 
Izpisua Belmonte says the work meets current ethical and legal guidelines. “As important for health and research as we think these results are, the way we conducted this work, with utmost attention to ethical considerations and by coordinating closely with regulatory agencies, is equally important.” “This breakthrough reinforces an increasingly inescapable fact: biological categories are not fixed – they are fluid,” said Anna Smajdor at the University of East Anglia, UK, in a statement. “This poses significant ethical and legal challenges.” “The scientists behind this research state that these chimeric embryos offer new opportunities, because ‘we are unable to conduct certain types of experiments in humans’. But whether these embryos are human or not is open to question,” she said. Julian Savulescu at the University of Oxford said in a statement: “This research opens Pandora’s box to human-nonhuman chimeras. These embryos were destroyed at 20 days of development but it is only a matter of time before human-nonhuman chimeras are successfully developed, perhaps as a source of organs for humans. That is one of the long-term goals of this research.” “The key ethical question is: what is the moral status of these novel creatures?” he said. “Before any experiments are performed on live-born chimeras, or their organs extracted, it is essential that their mental capacities and lives are properly assessed.” Cell DOI: 10.1016/j.cell.2021.03.020
1
Fail of the Week: A Candle Caused Browns Ferry Nuclear Incident (2018)
A colleague of mine used to say he juggled a lot of balls; steel balls, plastic balls, glass balls, and paper balls. The trick was not to drop the glass balls. How do you know which is which? For example, suppose you were tasked with making sure a nuclear power plant was safe. What would be important? A fail-safe way to drop the control rods into the pile, maybe? A thick containment wall? Two loops of cooling so that only the inner loop gets radioactive? I’m not a nuclear engineer, so I don’t know, but ensuring electricians at a nuclear plant aren’t using open flames wouldn’t be high on my list of concerns. You might think that’s really obvious, but it turns out if you look at history that was a glass ball that got dropped. In the 1960s and 70s, there was a lot of optimism in the United States about nuclear power. Browns Ferry — a Tennessee Valley Authority (TVA) nuclear plant — broke ground in 1966 on two plants. Unit 1 began operations in 1974, and Unit 2 the following year. By 1975, the two units were producing about 2,200 megawatts of electricity. That same year, an electrical inspector and an electrician were checking for air leaks in the spreading room — a space where control cables split to go to the two different units from a single control room.  To find the air drafts they used a lit candle and would observe the flame as it was sucked in with the draft. In the process, they accidentally started a fire that nearly led to a massive nuclear disaster. You can build walls 30 inches thick, but you still need to get utilities in and out of the area. This was the case in the spreading room — the area where cables from all over the plant converged on the common control room. The workers found a 2×4 inch opening near a cable tray. They stuffed the hole with foam and checked it again. There was still a draft and the flame was sucked into the hole, lighting the foam on fire. The inspector tried to knock out the fire, first with a flashlight and then with rags. 
By this time, the wall was on fire and several fire extinguishers were used to attack the problem but without success. The fire burned on. In fact, the fire extinguishers may have blown burning material out of the hole, making it even worse. Because of the efforts to put it out, the fire wasn’t officially reported for 15 minutes. There was also confusion about what phone number to use to report the fire. Perhaps most surprising is that for whatever reason, the operators elected to continue running the reactors despite the fire. According to the official report they then noticed that pumps in the emergency core cooling system were running: Control board indicating lights were randomly glowing brightly, dimming, and going out; numerous alarms occurring; and smoke coming from beneath panel 9-3, which is the control panel for the emergency core cooling system (ECCS). The operator shut down equipment that he determined was not needed, only to have them restart again. I wouldn’t operate my car like that, much less a nuclear reactor. After a few restarts, they started talking about shutting things down. Just then, the power output of unit 1 dropped for no apparent reason. They reduced the flow on the operating pumps which then promptly failed. Finally, the operators dropped the control rods to shut down the nuclear reaction. As you might expect, shutting down a reactor isn’t quick and easy. Electrical supply was lost to several systems in unit 1 including several key instrument and cooling systems. In unit 2, the panels were going crazy and there were many alarms. Then about 10 minutes after the unit 1 reactor started dropping its output, unit 2 followed suit. Unfortunately, the equipment failed there too and they lost emergency cooling and control of some relief valves. Unit 1 was struggling with very little instrumentation and a reduced number of relief valves. The fear was that if the core did not remain submerged in water, it would melt down. 
To keep the core underwater, they used the relief valves to drop the internal pressure from 1020 PSI to under 350 PSI so that a low-pressure pump could force water into the chamber. This decision was met with yet another problem; the low-pressure pumps were not working either so they had to rig up a workaround using a different pump. In unit 1, the water level was normally about 200 inches above the top of the core, but it fell to about 48 inches. Unit 2 had more pump capacity, but it still wasn’t enough. They rigged up the same makeshift pump arrangement. Before this all began, unit 2’s computer already happened to be down, and the unit 1 computer soon failed. With nearly all the instrumentation having failed, and the diesel generators down, they had very little on-site power. The phone system failed, preventing the control room from making outbound calls which were being used to send instructions to people operating valves and other key equipment manually. Meanwhile, the fire was still burning. There was a built-in extinguisher that could be manually activated with a crank. But during construction, those activation cranks all had metal plates placed under them to prevent accidental activation of the extinguisher system. Almost none of the plates had been removed when construction was complete. By the time they were finally able to operate the system, it didn’t stop the fire completely and had the effect of driving thick smoke into the control room. Two workers were tapped to investigate. They put on breathing gear and went into the spreader room to find that the neoprene covers on the cable were burning and emitting a thick black smoke. The quarters were cramped and one man described having to take the air cylinders off his back and push them along with the fire extinguisher in front of him to get under the trays about 30 feet to reach the flames. The extinguisher system wasn’t the only safety equipment that was ill-prepared for an emergency. 
Many of the breathing masks at the plant were not working. Some had improperly filled tanks and others were missing parts. The main tank on site was apparently low on pressure and unable to completely fill the working tanks which resulted in about 18 minutes of air per fill for those trying to fight the fire. The local fire department arrived on the scene but they were not allowed to run the effort — presumably because you want people with specific training to fight a fire in a reactor. However, the fire chief did repeatedly suggest that water was the right way to put out the fire, as it wasn’t actually electrical in nature. However, plant management didn’t agree. After the fire burned for over six hours, the plant personnel decided to try water. Unfortunately, the fire hose didn’t deploy fully so they were getting low pressure. In the heat of the moment, the workers erroneously decided the nozzle was defective and borrowed one from the fire department, but it had incompatible threads and would not stay on the hose. Even with these problems, water had the fire out in 20 minutes. You might think the fire being out is the end of the story, but no. The damage had been done — control of the two reactors was greatly inhibited and keeping the cores cool remained an emergency situation. The relief valves on unit 1 finally quit and pressure went up beyond the ability of the makeshift pump system to operate. There was an ancillary pump operating, but it couldn’t keep up and a meltdown seemed likely. In retrospect, there was a way to use some of unit 2’s equipment but no one figured that out at the time. Instead, it was luck that they were able to make repairs before time ran out. Workers fought to get the pressure valves back online and succeeded. This allowed the pressure to drop enough for the pump to continue providing fresh water. The candle flame started fire about 12:20 PM. Unit one reached full shutdown at 4 AM and that was the end of it. 
As much as it sounds like everything went wrong, it was even worse. There were a host of problems with equipment ranging from lights to tape recorders. Speaking of tape recorders, there was one really interesting phone conversation between J. R. Calhoun, the chief of TVA’s Nuclear Generation Branch at the time, and Frank Long of the NRC (and reported by a Canadian website): Calhoun: Yah, you know everything for those two units comes through that one room. It’s common to both units, just like the control room is common to both units. Long: That sorta shoots your redundancy. Thanks to creative problem solving, it appears the incident didn’t pose a public risk — although many people have critiqued how the public was kept informed (or not). There was never any radioactive leakage from the plant reported. So many questions. Why were they using candles when other methods were available? Why were they using flammable material as insulation? The investigation turned up that flawed tests indicated the polyurethane used in the foam was resistant to fire… in solid form. However, the foam was highly flammable and many people knew this. Many people didn’t know that candles were used for leak detection. Perhaps the worst bit of news is that two days earlier a similar fire had started but was put out quickly. The shift engineers had a meeting and had already decided to recommend a different way to test for leaks that didn’t require candles. But nothing had been done. Needless to say, the Nuclear Regulatory Commission made many changes to its fire protection standards and mandated silicone foam for firestops. It even influenced practices in other industries, too. If you want to watch the DVD the NRC released about the incident in 2009, you can. The fire caused about $10 million in direct losses and as much as $500 million in indirect costs. (That’s about $44 million and $2.2 billion if you convert the 1976 figures to 2018 dollars.) 
It took about 1000 man-years of effort during the 18-month recovery process. I know the debate over if we should have nuclear power or not is polarizing and I won’t tackle that here. But it is amazing that a high tech piece of equipment — no matter what it does — could be taken down by a candle and some bad procedures. You know there were all sorts of safety devices and procedures and that everyone there must have known the possible consequence of something going wrong. Yet you had a known fire problem ignored, bad air and fire equipment, and a host of other problems. So think about not only what balls you have in the air with each project, but ask yourself which of those are glass balls. Don’t forget to focus on the small seemingly inconsequential things. There’s also a danger in assuming that you don’t need at least some understanding of all the balls in the system. After all, if someone high up had realized foam caught fire and workers were using candles around it, this might have been a different story.
3
Piku experiments with “build a web app fast” prototyping
amontalenti/webappfast-piku
1
Tutorial: Start Collecting NFTs on the Wax Blockchain
Hedgehog correspondent Jonathan Hope walks you through setting up a WAX Cloud Wallet and acquiring your first NFT! Choose wisely and you can even start "playing to earn" in multiplayer video games that use NFTs. Keep reading to learn how. Non-fungible tokens (NFTs) have exploded in popularity within the crypto space, finding all kinds of uses. Ranging in price from under a penny to over a million dollars, NFTs have created a multibillion-dollar market. The "non-fungible" part indicates that each NFT is unique — they are not interchangeable with each other. Quickly, what is an NFT and why should we care? The short version is that they are data files containing content — or linking to content — like a photo, video, or audio file that is cataloged or "minted" onto a blockchain, giving it something similar to a certificate of authenticity. Because the ledger of a blockchain cannot be changed, this certificate follows that NFT forever. From the creator to anyone who receives the NFT, there is traceable accounting back to the source. With this technology, artists can ensure that consumers are getting their official work, and consumers can verify its authenticity. Games have also found utility in NFTs held in a wallet. Much like a card game, different NFTs provide different uses and resources. Companies have found ways of rewarding customers through loyalty programs that incentivize activity by offering NFT exclusives and tiered access to resources, using NFTs as keys. That brings us to the WAX blockchain. WAX stands for Worldwide Asset eXchange; the protocol's purpose is to make it easy, cheap, and eco-friendly to buy, sell, and use NFTs. WAX is compatible with various platforms, but the community's favorite exchange is Atomic Hub, where you can do all of your buying, selling, and trading. The fees for using WAX are low enough for anyone to participate and explore. The first step is creating your WAX Cloud Wallet.
This will store your WAX Coins and your NFT inventory. Get started at https://all-access.wax.io/. After entering your email and choosing a password, you will need to verify your email address. Once you have activated your account through the email sent, you will be prompted to log in using your email/password and accept the terms of service. Congratulations! You have set up your WAX Cloud Wallet and are ready to begin trading NFTs. The login defaults to the dashboard page where you can find your provided wallet address. Every WAX Cloud Wallet address ends with .wam. You can find your address in the top right corner of the dashboard page. In the example above, the wallet address is yosxy.wam. No long string of random numbers and letters, just the simple name plus .wam at the end. The next step is to fund your WAX Cloud Wallet with WAX coins. One very cool feature of this platform is all of the ways you can do this. In the top left-hand corner of the dashboard you will discover the "Buy WAXP" link to open the menu for funding your wallet. Below is an example of a few ways you can load your wallet. This includes a very easy card/bank option: It may take a few minutes to see your funds populate into your account. When it does, you will notice the balance update in the top left corner of the dashboard. With funds in your wallet, you are ready to grab some NFTs! Next, get your WAX Cloud Wallet connected to the NFT exchange Atomic Hub, found here: https://wax.atomichub.io/. Choose the login link at the top right of the homepage. Then, select the WAX Cloud Wallet option to log in. If you are using the same browser, the platform will sync to your WAX Cloud Wallet and prompt you to approve this connection. Otherwise you may need to log in again. After you approve, your wallet address will populate in the top right corner of the page. Now you are ready to go! The market section of Atomic Hub is a great place to start exploring the NFTs available for sale.
You have the ability to filter NFTs created by specific artists, games, or collections on the left side of the page. The filter functionality also provides advanced options based on rarity, value, and newest listings. My favorite is the Robotech collection, because of the awesome artwork and absolute nostalgia. When you discover an NFT that you like, check the details section for all of the info about it: This information tells you the creator, the seller, how many were made, how many have been burned, and the price for this NFT. The basic idea is that the fewer copies of an NFT minted (rarity), or the more utility it offers, the more valuable it becomes. All of the NFTs purchased on Atomic Hub can be viewed in your inventory and in your WAX Cloud Wallet. As with all crypto endeavors, it is important to do your own research (DYOR) and know how to protect yourself from scams. Fortunately, Atomic Hub provides some features to help with that. The platform has a "whitelisting" process to help verify quality creators and label them easily. For a creator to become whitelisted on Atomic Hub, they have to prove that their art is original and has a community following, a clear theme, good details in the descriptions, and social media links with good history. However, Atomic Hub does not absolutely guarantee that whitelisted projects are "legit," so be sure to double-check. The default setting on the Atomic Hub marketplace shows only whitelisted NFTs. If you want to disable this feature and research the NFT yourself, uncheck this box: The whitelisting feature does some of the legwork for you. If you want to be certain, or want to browse NFTs that are not yet whitelisted, there are a few things you can do. The first is to verify what collection your particular NFT came from. Using the Robotech NFT from earlier, you can tell via a verifiable link what collection the NFT was created in.
Clicking the link in the top right corner under the Collection Name in the details section (with a check-mark verifier), you get information on the collection creator. The link opens the collection details: In this particular example the collection has a URL link to the project page, which adds another layer of verification. When looking at a scam page, the URL will often lead to a poorly developed page with no links to community pages like Twitter, Telegram, Medium, or Instagram. More sophisticated scammers will have links to social pages but will have very little engagement or poorly written descriptions. You can also perform a reverse image search using Google or another tool to discover whether or where the image has been used online in the past. At the center of verifying your NFT is checking the reputation of the collection and the creator. You want to see community activity, history, and a description that makes sense. A really easy way to start exploring verified content is using the WAX content partners via the WAX Digital website. There are some very recognizable brands already, like Topps, Hot Wheels, Atari, Power Rangers, Street Fighter, Weezer, and of course Robotech! There is a lot of buzz around "play to earn" games using NFTs. Many of the top games, like Alien Worlds, use the WAX blockchain. This is a very fun way to use NFTs and, in some cases, to generate income. These games require varying amounts of investment to participate. An accessible game to try out that doesn't break the bank is AlienShips.io, with your first utility NFT being offered at around a dollar on the Atomic Hub marketplace: Filter by alienships.io in the market filter and purchase a ship NFT. Once it is showing in your inventory you are ready to play! The game connects directly to your WAX Cloud Wallet: There are several game modes and tiers based on what kind of ships, materials, or utilities you carry in your wallet.
The basic combat game mode shown below can be played using the inexpensive ship NFT. Art and gaming are just the beginning of the NFT tidal wave in crypto. Ownership of unique content is attracting everyone from digital artists and musicians to companies of all sizes. Tradable NFTs that unlock physical items to be shipped to the owner's location are also on WAX's roadmap. Other crypto projects are rewarding NFT holders with future airdrops of tokens and access to exclusive services. Like most things in the crypto world, NFTs hold their value in their scarcity relative to demand. This particular space is still very early in its adoption, and the WAX ecosystem is a great way to get your feet wet. Happy hunting! Enjoy the post? Check out Jonathan's website, follow him on Twitter, read his blog, and of course subscribe to the weekly Hedgehog newsletter!
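Everything the Atomic Hub interface displays (listings, collections, owners) is also exposed through the public AtomicAssets API for WAX. Here's a minimal sketch that builds a query URL for a wallet's inventory; the host `wax.api.atomicassets.io` and the parameter names (`owner`, `collection_name`, `limit`) are assumptions based on the community-run AtomicAssets API, so double-check them against the current docs before relying on this:

```python
from urllib.parse import urlencode

# Host of the community-run AtomicAssets API for WAX. This URL and the
# parameter names below are assumptions -- verify against current docs.
ATOMIC_API = "https://wax.api.atomicassets.io/atomicassets/v1"

def build_assets_url(owner, collection="", limit=10):
    """Build a URL that lists the NFTs held by a WAX wallet.

    owner: the wallet address, e.g. "yosxy.wam" from the walkthrough.
    collection: optionally narrow results to a single collection.
    """
    params = {"owner": owner, "limit": limit}
    if collection:
        params["collection_name"] = collection
    return ATOMIC_API + "/assets?" + urlencode(params)

# Example: list up to 10 Robotech NFTs held by the example wallet.
print(build_assets_url("yosxy.wam", collection="robotech"))
```

Fetching that URL (for example with `requests.get`) should return JSON describing each matching asset; inspecting the collection and creator fields there is the programmatic counterpart of the manual verification steps above.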
308
Amazon Argues Users Don't Own Purchased Prime Video Content
THR, Esq Amazon Argues Users Don’t Actually Own Purchased Prime Video Content Amazon is asking a California federal judge to dismiss a suit over purchased Prime Video content, and it's arguing that when a user buys content on the platform, what they're really paying for is a limited license for “on-demand viewing over an indefinite period of time.” When an Amazon Prime Video user buys content on the platform, what they’re really paying for is a limited license for “on-demand viewing over an indefinite period of time” and they’re warned of that in the company’s terms of use. That’s the company’s argument for why a lawsuit over hypothetical future deletions of content should be dismissed. In April, Amanda Caudel sued Amazon for unfair competition and false advertising. She claims the company “secretly reserves the right” to end consumers’ access to content purchased through its Prime Video service. She filed her putative class action on behalf of herself and any California residents who purchased video content from the service from April 25, 2016, to present. On Monday, Amazon filed a motion to dismiss her complaint arguing that she lacks standing to sue because she hasn’t been injured — and noting that she’s purchased 13 titles on Prime since filing her complaint. “Plaintiff claims that Defendant Amazon’s Prime Video service, which allows consumers to purchase video content for streaming or download, misleads consumers because sometimes that video content might later become unavailable if a third-party rights’ holder revokes or modifies Amazon’s license,” writes attorney David Biderman in the motion, which is posted below. “The Complaint points vaguely to online commentary about this alleged potential harm but does not identify any Prime Video purchase unavailable to Plaintiff herself.
In fact, all of the Prime Video content that Plaintiff has ever purchased remains available.” Further, Amazon argues, the site’s required user agreements explain that some content may later become unavailable. “The most relevant agreement here — the Prime Video Terms of Use — is presented to consumers every time they buy digital content on Amazon Prime Video,” writes Biderman. “These Terms of Use expressly state that purchasers obtain only a limited license to view video content and that purchased content may become unavailable due to provider license restriction or other reasons.” Amazon argues it doesn’t matter whether Caudel actually bothered to read the fine print. “An individual does not need to read an agreement in order to be bound by it,” writes Biderman. “A merchant term of service agreement in an online consumer transaction is valid and enforceable when the consumer had reasonable notice of the terms of service.”
1
Why Do We Need DApps?
MEVerse · Nov 12, 2020

With the arrival of blockchain technology, many users have discovered decentralized applications, commonly known as DApps. However, for the average user, the difference between centralized and decentralized applications can go unnoticed. Their distinguishing characteristics are mostly visible at the developer level, even though they have motivated the thousands of DApps that exist today, ranging from gaming to finance. Because many people do not realize they are actually using DApps, they may think that DApps are unnecessary. However, if you want a high level of security and control over your applications, DApps can help you a lot. Average users gain ownership of their personal information when using DApps. Many companies use and sell your data stored on their servers without users' consent, in addition to suffering security breaches. There is no doubt that this point is critical when choosing between an app and a DApp. When you are using a financial application, under normal conditions, the database depends on servers located at the company in question or in a third-party provider's cloud. This can result in the exposure of data if the servers are vulnerable. With DApps, on the other hand, the chances of data or identity theft decrease, as long as the user uses the protocols properly.

Trust and Decentralization

Trust is earned and priceless. Given that DApps are open source, anyone can inspect the code at any time, making applications transparent and auditable by third parties, which ultimately helps prevent identity or monetary theft. Undoubtedly, this feature of the blockchain allows users to control what they are interacting with.
But, since many users are ordinary people without programming knowledge, security companies like Certik are in charge of making the job easier and more transparent, adding an extra layer of security to DApps. On the other hand, the decentralized nature of DApps makes them independent of centralized servers or networks, which create typical problems such as censorship or downtime from single points of failure. Because DApps depend on the many nodes that support the blockchain network on which a particular application runs, the multiple problems to which traditional applications are exposed are no longer a headache. As blockchain is a new technology, there are similarities to the Internet of the nineties; users need more time to get accustomed to it. There is no doubt that we will see significant advances in the development, use, and management of DApps among the millions of blockchain technology enthusiasts in the coming years. FLETA is a blockchain platform that aims to offer infrastructure that can be applied to real-world business models. FLETA has its own core blockchain technologies, such as Level Tree Validation, Parallel Sharding, an Independent Multi-chain Structure, Block Redesign, and PoF (Proof-of-Formulation), its own consensus algorithm. With them, it aims to solve problems that existing platforms have, such as slow speeds, scalability limitations, and excessive fees, and to provide a flexible development environment. Moreover, through its Gateway technology, it enhances interoperability by allowing projects that issued their tokens through other mainnets to keep those mainnets while using the FLETA chain.
Sendsquare, the foundation that developed the FLETA project, has been selected as one of the blockchain PoC support projects by the National IT Industry Promotion Agency (NIPA) of the South Korean Government since 2019, and has developed a blockchain-based on-chain clinical data management system (eCRF System) and an RWD basic clinical research analysis report platform using blockchain technology. Feel free to join and connect with us through any of our official channels below:
Website: https://fleta.io
FLETA Store: https://fleta.ogn.app/#/
Twitter: https://twitter.com/fletachain
Telegram: https://t.me/FLETAofficialGroup
Github: https://github.com/fletaio
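The trust and tamper-resistance described above ultimately rest on hash chaining: each block commits to the hash of the block before it, so altering any historical record invalidates every later link. A toy, single-node sketch in Python (real chains add consensus, signatures, and Merkle trees on top of this idea):

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into [(record, hash), ...], each hash covering the last."""
    chain, prev = [], "0" * 64
    for rec in records:
        prev = block_hash(rec, prev)
        chain.append((rec, prev))
    return chain

def verify_chain(chain):
    """Recompute every hash; any altered record breaks all later links."""
    prev = "0" * 64
    for rec, h in chain:
        if block_hash(rec, prev) != h:
            return False
        prev = h
    return True

ledger = build_chain([{"tx": "alice->bob", "amt": 5}, {"tx": "bob->carol", "amt": 2}])
assert verify_chain(ledger)

ledger[0][0]["amt"] = 500        # tamper with an early record
assert not verify_chain(ledger)  # the chain no longer verifies
```

In a real network, every node holds a copy of the chain and re-runs this kind of verification, which is why a single compromised server can't quietly rewrite history.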
5
The Machinists Who Keep The New York Times Running (2016) [video]
2
College Football Recruitment’s Biggest Problem Is Our Definition of It
Brendan Cahill · Jul 29, 2020

The college football offer has become the god of the American high school football recruit — the sport's biblical “golden calf,” so to speak. Yet the concepts that are peddled about college football recruiting are getting in the way of recruits actually enjoying their recruiting process. I believe that all pain in college football recruiting is a simple lack of touch with reality. Happiness, therefore, would be a simple re-acquaintance with reality. Reality for me isn't reality for you. And, as much as I might know from my 5+ years in the college football recruiting game, I can't see your life from your perspective. So, to help, the only thing I, or really any coach, can truly offer is concepts. Concepts, ideas, and opinions can be helpful. Concepts point us in the approximate direction of reality but cannot themselves tell us what it is for us. A good concept can get you in the right neighborhood, but it cannot give you the exact address of what's ultimately going to be best for you. The first major concept peddled by the college football recruiting industry is that if you get ranked, you will get offered. The second is that if you train with a renowned private coach with connections, they will get you an offer. The last is that if you sign up for a recruiting service like NCSA (or maybe even this one), more accurate information will get you an offer. The hurdle we run into with these major recruiting concepts is when they stop being concepts and start being treated as gospel, by both those peddling them and those being peddled to. Gospel implies infallibility, or truth. Truth, however, is something that can only be known individually.
But, to create the veneer of infallibility, elementary statistical theater is performed: “98.2% of all FBS players train with us,” “voted #1 camp by parents,” or “over 99.99999% of trainees receive a college football offer to the college of their choice.” In your day-to-day grind you have very limited bandwidth to break down how trustworthy the statistical theater played by the college football recruiting industry is, so I will attempt it for you. Saying that “98.2% of all FBS players train with us” has a few marketing problems. The main one is: how does this camp define “train”? To have trained with this camp, did a camper go to one event only, or did they train one-on-one with this camp's crew for years on end and thus were a true product of its coaching expertise? Claiming that you are “voted #1 [insert camp] in America by parents” is equally vague. Who exactly voted to make this camp number one? How many parents actually voted? How was the voting conducted? And which parents did the voting? Lastly, saying that 99.999% of all kids who train with you go to a college of their choice or receive an offer is equally vague. Any college football player who puts in enough effort over the course of a 4–6 month period will get an offer. There are 800+ college football programs in the country. Saying “college of their choice” is technically correct, in that any college your client chooses to go to was by their choice, but it doesn't mean it was their dream school. In a word, there is plenty of correlation and skimpy causation in the connection between any particular college recruiting service and the results they give you. My response to these concepts has been to come up with my own counter-concepts. You can't play college football if you can't get into the school academically. You can't play big-time football unless you are a big-time talent, and big-time talent doesn't need camps to be discovered.
And you can easily become a self-marketing machine and generate your own offers given consistent effort over the course of 3–6 months. My own counter-concepts are biased in favor of my own experiences. I went to a ranking camp as a 16-year-old kid and, in my first kicking experience, was utterly ignored by coaches for three days while they catered only to the top 5 kids at a 100+ person camp. I didn't grow up with Twitter being a tool in college football recruiting, so I made phone calls, emailed, and sometimes just showed up unannounced at a college football coach's office to introduce myself and get recruited. I wasn't a big-time talent, so I knew I needed to have big-time grades and an oversized work ethic to compensate for the lack of recruiting attention. I also played at a micro-sized, 1,200-student D3 school, Hartwick College, and had an awesome time. For parents reading this, I think the single best thing you can do in the college recruiting process is to collect concepts from everyone you can, deploy them quickly, and see whether or not you are finding them productive. It could be you find a great ranking camp that blows your mind! Continue training with those guys. It could be you have a great local coach who indeed does have connections to the schools you like. Ultimately, the greatest barrier to success in college football recruiting is our collectively flawed conception of what we think it is. Whatever concepts you use to guide your college football recruiting process, they ultimately must be yours and no one else's.
3
Mini Japanese Phrasebook with Audio
Sign up for lessons by e-mail and book release notifications. By Japanese Complete.

Casual or Plain: for use with same-age friends. Formal or Standard-Polite: for use with teachers, elders, new acquaintances.

Greetings (casual): おはよう [ohayō] Good Morning · こんにちは [konnichiwa] Good Day · こんばんは [kombanwa] Good Evening/Night · おやすみ [oyasumi] Good Night/Sweet Dreams
Greetings (formal): おはようございます [ohayō gozaimasu] Good Morning · こんにちは [konnichiwa] Good Day · こんばんは [kombanwa] Good Evening/Night · おやすみなさい [oyasuminasai] Good Night/Sweet Dreams

Expressing Thanks: ありがとう [arigatō] Thank you · どうもありがとう [dōmo arigatō] Thank you very much · ありがとうございます [arigatō gozaimasu] Thank you very much (polite) · どうもありがとうございます [dōmo arigatō gozaimasu] Thank you very much (extra polite)

After You: おさきに [osaki ni] After you (casual) · おさきにどうぞ [osaki ni dōzo] Please, after you (polite)

Here you go (offering): どうぞ [dōzo] Here you go / here, have this.

Apologizing: ごめん [gomen] Mea culpa / my bad (casual) · ごめんなさい [gomen'nasai] Sorry for my error (polite)

Excuse me / Pardon: すみません [sumimasen] Excuse me, pardon me · しつれいします [shitsureishimasu] Excuse the intrusion / pardon my exit

Mealtime / Meals: いただきます [itadakimasu] We humbly receive (this meal), said before digging in · ごちそうさまでした [gochisōsama deshta] The meal was satisfying and fantastic, said on finishing a meal

Where are you from?: どこからきた? [doko kara kita?] From where did you arrive? (casual) · どこからきましたか。 [doko kara kimashta ka] From where do you hail? (polite)

What's your name?: おなまえは? [o-namae-wa?] Your name is...? (casual) · おなまえはなんですか。 [o-namae-wa-nan-dess-ka] What is your name? (polite)

Read about the Japanese Complete System. Master the 777 most-frequent kanji based on frequency analysis. Develop a deep and indelible intuition for Japanese grammar and sentence structure. Lay a solid foundation for immersion, work, study, and play in Japan. Check out the 777 most frequent kanji. Learn more about Japanese Complete.
3
Remnants of the Precursors is an open source modernization of Master of Orion
rayfowler/rotp-public
3
Hypergrowth
Today, we announced our $150M Series D at a $2.5B valuation. I joined Vercel a year and a half ago. Before Vercel, I'd never worked at a hypergrowth company. When you're in an environment with exponential company growth, you're also likely to have exponential personal growth. To visualize growth, here's a timeline of my work at Vercel alongside major events: And that brings us to today: Here are some thoughts on my past year and a half at Vercel. In 2017, I heard about this company making it simple to deploy your website. Just one command in the CLI and you're online. I remember checking out their product and feeling connected. This sleek, well-designed tool was for me. It helped me become a better developer and I was hooked. Joining Vercel came at the right time. I felt stagnant in my career, longing to spend more time being a creator and less time as a product engineer. I was excited by the Vercel product and even more excited about the team. So, I made the jump and joined. I joined as employee #34. I quickly got to know everyone and helped wherever possible. Hypergrowth feels like there's more water coming into the boat than you have plugs to fill the holes with. I was able to help with engineering, product, design, presales, support, you name it. I helped our customers be successful using Vercel and Next.js. As the company grew, I was able to double down and focus on what I do best: educating developers and building developer communities. We hired more engineers, more product managers, more engineering managers, more customer success managers, etc. Our customers succeeded and grew, and we grew with them. Hiring is incredibly difficult. When I moved into leading DevRel, I knew I needed to grow a team, but first I had to create the foundation. I tried to document as much of my process and workflow as possible. In an ideal situation, I'm able to “clone” myself with my first hire. They learn the tips and tricks I use to handle my job effectively.
Then, for the next hire, we have two people to help influence. And so on (thanks, Delba!). For me, the hardest part of growing a team is delegation. It's hard to give up ownership over something you've carefully crafted or implemented. It requires an immense level of trust in those you're delegating responsibility and ownership to. This comes back to hiring. Finding people I could trust, and then working to strengthen that trust, has made our team highly effective. It's still early days for DevRel at Vercel, but I'm extremely proud of what we've accomplished. Some recent highlights: I joined Vercel without having worked remotely full-time (only part-time). I was incredibly scared about missing the in-person parts of work I enjoyed: being able to work on hard problems together with someone in the same room, the easy opportunities to chat about non-work stuff and learn more about my co-workers, and the clear separation between home and work. Now, these are all solvable problems in a remote world. But they're not easy. Both I and the company I work for have to be intentional about not losing the best parts of being in person. Especially with a globally distributed team. In one of my first jobs, I'd meet up with three other people to carpool every morning. Most mornings, the last thing I wanted was to make small talk in the car before work. I wanted to sleep. So, I would struggle to get 30 minutes more of sleep before going into my 8-hour day. Then, back to the carpool for the ride home. Rinse and repeat. Reflecting on this, I couldn't fathom how unhealthy it sounded for me. I wanted more flexibility with my schedule. Some days, I wanted to get that extra 30 minutes of sleep and then stay an hour later. Other days, I wanted to leave as soon as possible and enjoy the nice weather. But I was bound to the shackles of the 9 to 5 and my limiting carpool. I thought about the mornings I'd get up and hit snooze for five more minutes. Then, I contrasted to today.
Today, I wake up depending on my schedule for the day. Sometimes I'll sleep in a bit later, others I'll wake up extra early and go to the gym. When I wake up, there's no carpool. There's no 30 minutes of forced sleep and 30 minutes of small talk on the ride home. I just walk to my office in the other room. I can design my day to best fit my life. I can make accommodations for my life first and work second. The flexibility of remote work is a privilege and one that I am thankful for every day. For Vercel, we're just getting started. Earlier this month, Rich Harris joined to work on Svelte full-time. Many more exciting announcements are coming in the following months. As outlined in our Series D post, we're going to use this funding to: I've grown more as a developer and creator in the past year and a half than in the previous five. I'm excited to keep educating and helping developers succeed with React, Svelte, or any frontend framework of their choice.
3
Gen X and Gen Y Health Declining compared to parents at same age (U.S. Study)
Recent generations show a worrying decline in health compared to their parents and grandparents when they were the same age, a new national study reveals. Researchers found that, compared to previous generations, members of Generation X and Generation Y showed poorer physical health, higher levels of unhealthy behaviors such as alcohol use and smoking, and more depression and anxiety. The results suggest the likelihood of higher levels of diseases and more deaths in younger generations than we have seen in the past, said Hui Zheng, lead author of the study and professor of sociology at The Ohio State University. "The worsening health profiles we found in Gen X and Gen Y is alarming," Zheng said. "If we don't find a way to slow this trend, we are potentially going to see an expansion of morbidity and mortality rates in the United States as these generations get older." Zheng conducted the study with Paola Echave, a graduate student in sociology at Ohio State. The results were published yesterday (March 18, 2021) in the American Journal of Epidemiology. The researchers used data from the National Health and Nutrition Examination Survey 1988-2016 (62,833 respondents) and the National Health Interview Survey 1997-2018 (625,221 respondents), both conducted by the National Center for Health Statistics. To measure physical health, the researchers used eight markers of a condition called metabolic syndrome, a constellation of risk factors for heart disease, stroke, kidney disease and diabetes. Some of the markers include waist circumference, blood pressure, cholesterol level and body mass index (BMI). They also used one marker of chronic inflammation, low urinary albumin, and one additional marker of renal function, creatinine clearance. The researchers found that the measures of physical health have worsened from the Baby Boomer generation through Gen X (born 1965-80) and Gen Y (born 1981-99). 
For whites, increases in metabolic syndrome were the main culprit, while increases in chronic inflammation were seen most in Black Americans, particularly men. "The declining health trends in recent generations is a shocking finding," Zheng said. "It suggests we may have a challenging health prospect in the United States in coming years." Zheng said it is beyond the scope of the study to comprehensively explain the reasons behind the health decline. But the researchers did check two factors. They found smoking couldn't explain the decline. Obesity could help explain the increase in metabolic syndrome, but not the increases seen in chronic inflammation. It wasn't just the overall health markers that were concerning for some members of the younger generations, Zheng said. Results showed that levels of anxiety and depression have increased for each generation of whites from the War Babies generation (born 1943-45) through Gen Y. While levels of these two mental health indicators did increase for Blacks up through the early Baby Boomers, the rate has been generally flat since then. Health behaviors also show worrying trends. The probability of heavy drinking has continuously increased across generations for whites and Black males, especially after late-Gen X (born 1973-80). For whites and Blacks, the probability of using street drugs peaked at late-Boomers (born 1956-64), decreased afterward, then rose again for late-Gen X. For Hispanics, it has continuously increased since early-Baby Boomers. Surprisingly, results suggest the probability of having ever smoked has continuously increased across generations for all groups. How can this be true with other research showing a decline in overall cigarette consumption since the 1970s? "One possibility is that people in older generations are quitting smoking in larger numbers while younger generations are more likely to start smoking," Zheng said. "But we need further research to see if that is correct."
Zheng said these results may be just an early warning of what is to come. "People in Gen X and Gen Y are still relatively young, so we may be underestimating their health problems," he said. "When they get older and chronic diseases become more prevalent, we'll have a better view of their health status." Zheng noted that the United States has already seen recent decreases in life expectancy and increases in disability and morbidity. "Our results suggest that without effective policy interventions, these disturbing trends won't be temporary, but a battle we'll have to continue to fight."
Our Endless Appetite for Chocolate Has Bitter Environmental Consequences
Above: A man cuts cocoa pods on a cocoa farm in the village of Andou M’batto, in the region of Alepe in Ivory Coast on Sept. 21. Credit: Thierry Gouegnon for HuffPost

The endless global appetite for chocolate has bitter consequences in West Africa, especially in Ivory Coast and Ghana, where most of the world’s cocoa originates. The landscape in the region has been transformed, with expanses of forests razed to make way for cocoa bean plantations, which are eating up protected land and national parks and destroying once-thriving ecosystems. Cocoa production, catering especially to a wolfish appetite for candy in Europe and the U.S. (each American eats several pounds of chocolate a year; in Switzerland, 19.8 pounds), has led to the decimation of forests. The consequences have been devastating. Rainfall is down, temperatures have risen and biodiversity has dwindled in one of the most naturally rich forest habitats in the world, home to endangered animals including chimpanzees, elephants and pygmy hippos. Forests once loud with animal sounds are now graveyards. The soil, overused, has lost its fertility. But there is a glimmer of hope for the region. Though deforestation is worsening globally, with West Africa seeing some of the worst effects, Ivory Coast and Ghana seem to be rebounding, according to new data. Signs point to the concerted efforts of governments, nongovernmental organizations and multinational corporations to combat the forest loss driven by cocoa cultivation and to promote more sustainable — and more humane — systems of agriculture. Ivory Coast, which supplies most of the world’s cocoa (40%) but receives only a small share of the profit (5%), has been particularly hard hit by deforestation. In less than a century, the country has lost most of its forest cover ― an area about the size of Louisiana ― as poor communities settled in some of its forests, clearing trees to farm cocoa. By 2018, approximately 2.5 million hectares of forest cover remained, down from 16 million hectares in 1960, according to UN-REDD, a United Nations climate program.
Over the course of 2018, Ivory Coast saw further alarming levels of forest loss. The rate rose by 26%. Figures were even higher in neighboring Ghana, where the forest loss rate jumped by 60% ― higher than any other tropical country in the world (although Ghana’s government has disputed the number). Much of that loss is attributed to cocoa farming. But this year’s figures paint a more hopeful picture, with annual tree cover loss rates slowing in both countries over 2019, according to satellite data obtained by University of Maryland researchers and published by Global Forest Watch in June 2020. The two countries went from the worst performing to the best ― both cutting their rates of tree loss by more than 50% compared with 2018. It’s too early to say for certain what is behind the decline, but the World Resources Institute (WRI), a nonprofit based in Washington, D.C., that leads Global Forest Watch, points to a bundle of anti-deforestation measures implemented over the past few years. “The 2019 decline is a welcome signal the initiatives ... and pledges by both countries and major cocoa and chocolate companies to end deforestation could be having an impact,” the WRI said. Although only time will tell if it’s a pattern or a blip ― “we like to wait a few years to see if it’s a sustained trend,” Caroline Winchester, a WRI researcher, told HuffPost ― there is, at last, some hope that the forests can be saved. Ghana and Ivory Coast have agreed to a swath of anti-deforestation measures over the years. Some have yielded better results than others. Both are committed to REDD+, a United Nations emissions reduction program that sees resource-rich, forested countries compensated, directly or in carbon credits, for not cutting down or degrading their forests. Ivory Coast also signed the U.N. New York Declaration on Forests in 2014, committing to restore a fifth of lost forest and reduce deforestation by 80% by 2030.
In the years since, however, global deforestation has risen and there’s little evidence to suggest that these commitments have led to significant progress on curbing deforestation. Critics chalk that up to weak enforcement. At the end of last year, the Ivory Coast government partnered with startup Seedballs, which focuses on reforestation efforts. Big chocolate companies ― whose reputations have long been soiled for failing to trace and remove illegal cocoa sources (cocoa grown on protected areas or with child labor) ― have also scrambled to act. Manufacturers like Swiss giant Nestlé and the Chicago-based Mars Wrigley, as well as cocoa traders, such as U.S. food company Cargill, are committing to programs supporting sustainable production. One of the most prominent is the Cocoa & Forests Initiative, launched in 2017. It’s a public-private partnership between the governments of Ivory Coast and Ghana and 35 cocoa companies, with the aim of ending deforestation and restoring forests. Run by the World Cocoa Foundation (WCF), the initiative is training farmers on “climate-smart” agriculture: planting cocoa under the shade of old trees instead of felling them; diversifying from cocoa to other food crops to increase soil fertility; and modernizing the old, back-breaking cocoa farming methods. The full-sun technique many farmers use requires the removal of surrounding shade trees when planting cocoa, a method that drives deforestation. Meanwhile, planting just a single crop depletes the soil. Deli Diomande, a 42-year-old farmer in Mangouin, western Ivory Coast, has farmed cocoa for 10 years. “When I started planting cocoa, the soil was not so degraded, but when the sun comes now, the process is grueling,” he said. Diomande told HuffPost that hotter temperatures kill half of his cocoa trees most seasons. Where farmers once harvested several tons of cocoa beans, they barely harvest a single ton now, he said.
Under the initiative, the World Cocoa Foundation is also partnering with governments to map farms in the region so deforestation can be monitored in real time. The partnership reported on its progress for the first time this year. There was “notable progress measured on agroforestry, traceability, farmer training, policy and collaboration,” said Ethan Budiansky, WCF’s global environment director. “More than four million trees have been planted and one million farms have been mapped to improve traceability and eliminate deforestation from supply chains.” It’s possible that programs like the Cocoa & Forests Initiative “may be starting to have a positive effect because it’s a collective agreement,” said Winchester, of WRI, but she added that it will take years of monitoring to be able to say for sure. Experts warn that deforestation remains rampant despite programs and laws against it and that government action at times has led to mass evictions of families from protected lands. Meanwhile, reforestation efforts are hampered by a lack of government funding and infrastructure. One of the more recent and controversial programs in Ivory Coast is a 2018 policy meant to regenerate “classified forests” (supposedly protected areas) that had been illegally turned into cocoa farms by handing them to big chocolate manufacturers. The idea is that because the land is already degraded, it’s better to put it into companies’ hands to run sustainably as legal agro-forestry reserves. Industry experts are skeptical. “How would chocolate companies who have contributed to deforestation be able to manage forests in a sustainable manner? I just don’t see it happening,” said Michiel Hendriksz, director of a Swiss nonprofit working to simplify sustainable farming methods in the region. Rights groups have also pushed back against the policy, which they say will displace hundreds of thousands of people living there and amount to human rights abuses.
Despite the falling rates of deforestation reported this year, campaigners told HuffPost that the situation on the ground is still bad and will remain so as long as governments and companies fail to address one of the biggest drivers of deforestation in the region: biting poverty. The booming chocolate industry is worth billions of dollars globally, but the largest profits are concentrated in manufacturing and distribution businesses based in the U.S. and Europe. The trade is far from lucrative for cocoa farmers. The bitter irony is that many would never be able to afford the chocolate bars their beans produce. “I’ve worked in war zones like Sierra Leone and Iraq, and I’ve never seen worse poverty,” Etelle Higonnet, a researcher with U.S. environmental monitoring group Mighty Earth, told HuffPost. “If you don’t pay people a living income, how are they supposed to send their kids to school? Of course they chop down trees.” Higonnet said that, although deforestation appears to be slowing, any successes recorded may backslide if farmers continue to earn poorly — pressuring them to clear even more land and plant even more cocoa — and if supply chains remain untraceable. Cocoa farmers, mostly rural, small-scale producers, are at the very bottom of the global chocolate supply chain. Middlemen, who buy the cocoa off them and then sell the beans to big traders such as Cargill (which, in turn, sell to manufacturers like Mars), make a better profit. This layered supply chain can make the origins of cocoa hard to trace, and some of these middlemen, Higonnet said, are the source of the “worst cocoa with trafficking and slavery, child labor, and deforestation.” Approximately 890,000 child laborers work in cocoa farming, according to a 2018 report. According to Ousmane Attai, a journalist and cocoa specialist in Ivory Coast, “the situation is stagnant” when it comes to better rights for farmers.
He blames Ivory Coast’s ambitions to remain the leading producer of cocoa rather than to grow food crops for local consumption, which would earn farmers more income and give them fewer incentives to clear forests for cocoa farms. There are fears that the pandemic may cause an uptick in deforestation rates as monitoring pauses. With schools shut between March and May, child labor numbers have likely increased, too, said a cocoa expert working in Ivory Coast who preferred to stay anonymous. Ivory Coast and Ghana have tried to raise farmer income and consequently reduce deforestation. In 2019, the two countries banded together to set standard prices for cocoa at a minimum of $2,600 per ton with an additional “living income” differential of $400 per ton for farmers. The extra fees — to be enforced for the 2020-21 growing season — will go into a “stabilization fund” intended to protect farmers from cocoa price fluctuations. But a relatively small price uplift won’t be enough to fight the poverty that’s linked so closely with deforestation. Government corruption and inaction must also be addressed, campaigners say. Higonnet said little has been done by way of prosecutions despite extensive land mapping exercises that identify deforestation hot spots stemming from farming or illegal felling. The Ghana Cocoa Board, the government agency that fixes prices, and Ivory Coast’s Cocoa and Coffee Council marketing board could not be reached for comment. Environmentalists and big chocolate companies still sound hopeful notes. As awareness increases of the environmental and human rights abuses happening in West Africa to make chocolate, companies are setting goals on transparency ― with some performing better than others, according to a 2019 scorecard by the nonprofit Green America.
Olam Cocoa, a leading cocoa supplier to big chocolate companies, this week announced that it recorded “100% traceability” in its global supply chain ― meaning it can track cocoa beans back to their farm of origin. Nestlé says it has mapped 75% of its cocoa back to the farm and hopes to reach 100% this year. It also said it is seeing some progress in monitoring and addressing child labor in its cocoa sources. Mars Wrigley told HuffPost that its cocoa sourcing data will soon be upgraded to a full supply report and that it aims to map and trace 100% of its cocoa from Ivory Coast and Ghana down to farms by 2022. Mondelez, the Illinois-based snack giant behind Oreos and Toblerone, says it has mapped 85% of the farms it gets its cocoa from in Ghana, Ivory Coast and the Dominican Republic and says it is halfway to its 2025 aim to work only with farmers registered under its in-house sustainability program. Companies intensifying cocoa source tracing will move the needle on progress, Higonnet said. The fear of sanctions and of having their cocoa rejected could motivate more farmers to support deforestation-free farming. Battling deforestation is “difficult and complex,” said Budiansky, the WCF environment director. He said the organization has built on farm mapping progress to monitor more deforestation hot spots. Finding a lasting solution to deforestation, Higonnet said, requires all hands on board: governments to enforce laws better, big chocolate manufacturers to ramp up farmer education on zero deforestation and remove bad cocoa sources, and you, the consumer, to support sustainable manufacturers by paying a bit more for your chocolate bar. “In a way, we’ve made a lot of progress, but I feel like the progress doesn’t matter so much until you go all the way,” Higonnet said. “We’re so far from the finish line, which is the cocoa companies and the government cracking down on deforestation. Everyone in the supply chain should be responsible.
If you could spend three cents more so there’s no deforestation and child labor, you should do that. You should pay that extra money to do the right thing.” HuffPost’s “Work in Progress” series focuses on the impact of business on society and the environment and is funded by Porticus. It is part of the “This New World” series. All content is editorially independent, with no influence or input from Porticus. If you have an idea or tip for the editorial series, send an email to thisnewworld@huffpost.com.
The Rise of DOS: How Microsoft Got the IBM PC OS Contract
Perhaps the most controversial aspect of the original IBM PC is how Microsoft ended up with the contract for the operating system. This would eventually make Microsoft's MS-DOS the standard and set the stage for Microsoft to become the world's leading PC software company. As usual, there are lots of conflicting reports about the details that made this happen. But it mostly seems to be a case of Bill Gates and his company seeing the right opportunity at the right time and then executing well on the concept. In the early PC market, Microsoft had established itself as the largest producer of computer programming languages, notably with an interpreted version of the BASIC language that had become the default standard on just about every major PC to date. Meanwhile, Digital Research Inc. (DRI) had become the leading operating system vendor with its CP/M (Control Program for Microcomputers) operating system. It was designed for the Intel 8080 processor (and later used on the Zilog Z80) and used on machines such as the Osborne 1, Kaypro II, and even the Apple II, using a Z80 "Softcard" from Microsoft. In July 1980, before the IBM PC business unit known as "Project Chess" was formally approved, IBM sent a team led by Jack Sams, who would become the director of software development, to meet with Microsoft to discuss the PC market. The talks seem to have been in general terms, with IBM not disclosing many details of the actual PC. After the project was approved on Aug. 21, 1980, Sams and his colleagues went back to Microsoft and discussed licensing Microsoft's languages for the IBM PC, including BASIC but also COBOL, FORTRAN, and Pascal. Microsoft had already been working on 8086-based languages for other companies, so it seemed a logical fit. In just about every account of the meeting, IBM asked Microsoft about operating systems, and Bill Gates referred IBM to Digital Research, even getting DRI founder Gary Kildall on the phone to arrange a meeting for the following day.
There are many somewhat conflicting stories about what happened when IBM went to meet with DRI. Gates is quoted in Fire in the Valley: The Making of The Personal Computer by Paul Freiberger and Michael Swain as saying "Gary was out flying" that day, but Kildall always denied the implication, telling the authors of Hard Drive: Bill Gates and the Making of the Microsoft Empire that he had flown on a business trip to the Bay Area. IBM and its lawyers met with Kildall's wife, Dorothy McEwen, and presented DRI with a one-sided non-disclosure agreement, which the company refused to sign. Later, Sams would say in Hard Drive that IBM couldn't get Kildall to agree to spend the money to develop a 16-bit version of CP/M in the tight schedule IBM required. Whatever the reason, it's clear that IBM left Digital Research without coming to an agreement on an operating system. IBM communicated its problem to Microsoft later that month, and Microsoft's Gates, Paul Allen, and Kay Nishi apparently debated what to do about the problem. Allen knew of an alternative: Tim Paterson of Seattle Computer Products (SCP) had earlier built an 8086-based prototype computer, and while he was waiting for CP/M to be ported to the 8086, he created a rough 16-bit operating system for it. Paterson called it QDOS for Quick and Dirty Operating System, and according to Allen, it all fit within 6K. (It would later be renamed 86-DOS, and sometimes referred to as SCP-DOS.) By most accounts, Nishi was the one most strongly in favor of Microsoft getting into the operating system world. Allen said in his autobiography Idea Man that Gates was less enthusiastic. Allen called Seattle Computer Products owner Rod Brock and licensed QDOS for $10,000 plus a royalty of $15,000 for every company that licensed the software. In Big Blues: The Unmaking of IBM, Sams is quoted as saying Gates told him about QDOS and offered it to IBM. "The question was: Do you want to buy it or do you want me to buy it?"
Sams said. Since IBM had already decided to go with an open architecture, the company wanted Microsoft to purchase QDOS. Besides, Sams said, "If we'd bought the software, we'd have just screwed it up." Gates, Steve Ballmer, and Microsoft's Bob O'Rear met with IBM in Boca Raton and agreed that Microsoft would coordinate the software development process for the PC. According to Allen, under the contract signed that November, IBM agreed to pay Microsoft a total of $430,000, including $45,000 for what would end up being called DOS, $310,000 for the various 16-bit languages, and $75,000 for "adaptions, testing and consultation." What's notable about this is that IBM apparently was expecting Microsoft to ask for more money upfront or at least for a per-copy royalty. Instead, Microsoft wanted the ability to sell DOS to other companies. Indeed, Microsoft would soon realize that, under the name MS-DOS, the new operating system would be crucial to its success. In May 1981, Paterson left SCP and joined Microsoft. On July 27, 1981, Allen and Brock signed a contract selling DOS to Microsoft for $50,000 plus favorable terms on upgrades of the languages. According to Big Blues, Don Estridge, who headed the IBM PC project, said one reason the company went to Microsoft in the first place was that Microsoft's BASIC had hundreds of thousands of users, while IBM's BASIC, while excellent, had few users. According to Fire in the Valley, he also reportedly told Gates that when IBM CEO John Opel heard Microsoft would get the contract, he said, "Oh, is that Mary Gates' boy's company?" Opel and Bill Gates' mother served together on the national board of the United Way. Still, the controversy over DOS and CP/M continued. For years, Kildall and DRI would claim that Paterson's QDOS just copied CP/M. (Back then, software could not be patented, though it could be copyrighted.)
In Big Blues, Kildall was adamant that a lot of QDOS was stolen: "Ask Bill [Gates] why function code 6 [in QDOS and later in MS-DOS] ends in a dollar sign. No one in the world knows that but me." But Tim Paterson always denied it. He told the authors of Hard Drive, "At the time, I told [Kildall] I didn't copy anything. I just took his printed documentation and did something that did the same thing. That's not by any stretch violating any kind of intellectual property laws. Making the recipe in the book does not violate the copyright on the recipe." Paterson said his goal was to make it as easy as possible for software developers to port their 8080 programs to the new OS, so he used Intel's manual for translating 8080 instructions into 8086 ones. He then got Digital's CP/M manual, and for each function, he wrote a corresponding 8086 function. "Once you translated these programs, my operating system would take the CP/M function after translation and it would respond in the same way," said Paterson. "To do this did not require ever having CP/M. It only required taking Digital's manual and writing my operating system. That's exactly what I did. I never looked at Kildall's code, just his manual." Big Blues said Kildall considered suing IBM and Microsoft over DOS, but IBM mollified the company by offering to make the 16-bit version of CP/M also available on the PC. Indeed, when it came out, the IBM PC could run three operating systems: DOS, CP/M, and the UCSD P-system. But CP/M was priced at $240 versus $40 for DOS (likely because of the non-royalty terms of the Microsoft contract), and it was clear that IBM was intent on pushing DOS. Thanks to the non-exclusive agreement, Microsoft then had the rights to sell DOS for other machines—and that, in turn, set the stage for Microsoft to dominate the PC operating system industry for decades. For more, check out PCMag's full coverage of the 40th anniversary of the IBM PC.
Hackers leak passwords for 500k Fortinet VPN accounts
September 8, 2021 03:03 PM

A threat actor has leaked a list of almost 500,000 Fortinet VPN login names and passwords that were allegedly scraped from exploitable devices last summer. While the threat actor states that the exploited Fortinet vulnerability has since been patched, they claim that many VPN credentials are still valid. This leak is a serious incident, as the VPN credentials could allow threat actors to access a network to perform data exfiltration, install malware, and launch ransomware attacks. The list of Fortinet credentials was leaked for free by a threat actor known as 'Orange,' who is the administrator of the newly launched RAMP hacking forum and a previous operator of the Babuk Ransomware operation. After disputes occurred between members of the Babuk gang, Orange split off to start RAMP and is now believed to be a representative of the new Groove ransomware operation. Yesterday, the threat actor created a post on the RAMP forum with a link to a file that allegedly contains thousands of Fortinet VPN accounts. Post on the RAMP hacking forum At the same time, a post appeared on Groove ransomware's data leak site also promoting the Fortinet VPN leak. Post about the Fortinet leak on the Groove data leak site Both posts lead to a file hosted on a Tor storage server used by the Groove gang to host stolen files leaked to pressure ransomware victims to pay. BleepingComputer's analysis of this file shows that it contains VPN credentials for 498,908 users over 12,856 devices. While we did not test if any of the leaked credentials were valid, BleepingComputer can confirm that all of the IP addresses we checked are Fortinet VPN servers. Further analysis conducted by Advanced Intel shows that the IP addresses are for devices worldwide, with 2,959 devices located in the USA. Geographic distribution of leaked Fortinet servers Advanced Intel's Vitali Kremez told BleepingComputer that the now-patched Fortinet CVE-2018-13379 vulnerability was exploited to gather these credentials.
A source in the cybersecurity industry told BleepingComputer that they were able to legally verify that at least some of the leaked credentials were valid. However, sources are giving mixed answers, with some saying many credentials work, while others state that most do not. It is unclear why the threat actor released the credentials rather than using them for themselves, but it is believed to have been done to promote the RAMP hacking forum and the Groove ransomware-as-a-service operation. "We believe with high confidence the VPN SSL leak was likely accomplished to promote the new RAMP ransomware forum offering a "freebie" for wannabe ransomware operators," Advanced Intel CTO Vitali Kremez told BleepingComputer. Groove is a relatively new ransomware operation that currently has only one victim listed on its data leak site. However, by offering freebies to the cybercriminal community, they may be hoping to recruit other threat actors to their affiliate system. While BleepingComputer cannot legally verify the list of credentials, if you are an administrator of Fortinet VPN servers, you should assume that many of the listed credentials are valid and take precautions. These precautions include performing a forced reset of all user passwords to be safe and checking your logs for possible intrusions. If you have Fortinet VPN, please go force reset all your user's passwords. Also, it's probably not a bad idea to check logs and potentially spin up an IR or two — pancak3 (@pancak3lullz) September 7, 2021 If anything looks suspicious, you should immediately make sure that you have the latest patches installed, perform a more thorough investigation, and make sure that your users' passwords are reset. To check if a device is part of the leak, security researcher Cypher has created a list of the leaked devices' IP addresses.
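As a rough illustration of that check, the sketch below compares a set of your own VPN gateway addresses against a copy of the published leaked-IP list. The helper names and the sample addresses are hypothetical, not taken from the actual leak data; only the general approach (exact-match lookup against the list, validating inputs first) is the point.

```python
import ipaddress

def load_ips(path):
    """Read one IP address per line from a text file into a set."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def leaked_devices(device_ips, leaked_ips):
    """Return the subset of device_ips that appears in the leaked list."""
    hits = set()
    for ip in device_ips:
        ipaddress.ip_address(ip)  # validate; raises ValueError on malformed input
        if ip in leaked_ips:
            hits.add(ip)
    return hits

# Example with inline (documentation-range) addresses; in practice you would
# load both sets from files, e.g. load_ips("leaked_ips.txt").
leaked = {"192.0.2.5", "198.51.100.7"}
mine = {"203.0.113.9", "192.0.2.5"}
for ip in sorted(leaked_devices(mine, leaked)):
    print(f"WARNING: {ip} appears on the leaked list; reset credentials and review logs")
```

A match only means the device appeared in the dump; absence from the list is not proof the device was never compromised, so the password reset and log review above still apply.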
Fortinet did not respond to our emails about the leak, but after we contacted the company about the incident, it published an advisory confirming our reporting that the leak was related to the CVE-2018-13379 vulnerability. "This incident is related to an old vulnerability resolved in May 2019. At that time, Fortinet issued a PSIRT advisory and communicated directly with customers. And because customer security is our top priority, Fortinet subsequently issued multiple corporate blog posts detailing this issue, strongly encouraging customers to upgrade affected devices. In addition to advisories, bulletins, and direct communications, these blogs were published in August 2019, July 2020, April 2021, and again in June 2021." - Fortinet. Update 9/9/21: Added Fortinet's statement, mixed information about the validity of the credentials, and link to list of leaked device IP addresses.
PyTorch 1.8
We are excited to announce the availability of PyTorch 1.8. This release is composed of more than 3,000 commits since 1.7. It includes major updates and new features for compilation, code optimization, frontend APIs for scientific computing, and AMD ROCm support through binaries that are available via pytorch.org. It also provides improved features for large-scale training for pipeline and model parallelism, and gradient compression. Along with 1.8, we are also releasing major updates to PyTorch libraries including TorchCSPRNG, TorchVision, TorchText and TorchAudio. For more on the library releases, see the post here. As previously noted, features in PyTorch releases are classified as Stable, Beta and Prototype. You can learn more about the definitions in the post here. The PyTorch 1.8 release brings a host of new and updated API surfaces, ranging from additional APIs for NumPy compatibility to support for ways to improve and scale your code for performance at both inference and training time. Here is a brief summary of the major features coming in this release: As part of PyTorch's goal to support scientific computing, we have invested in improving our FFT support, and with PyTorch 1.8 we are releasing the torch.fft module. This module implements the same functions as NumPy's np.fft module, but with support for hardware acceleration and autograd. The torch.linalg module, modeled after NumPy's np.linalg module, brings NumPy-style support for common linear algebra operations including Cholesky decompositions, determinants, eigenvalues and many others. FX allows you to write transformations of the form transform(input_module : nn.Module) -> nn.Module, where you can feed in a Module instance and get a transformed Module instance out of it. This kind of functionality is applicable in many scenarios. For example, the FX-based Graph Mode Quantization product is releasing as a prototype contemporaneously with FX.
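To make the transform(input_module: nn.Module) -> nn.Module shape concrete, here is a minimal sketch of an FX transform (module and function names are our own, not from the release notes): it symbolically traces a module, rewrites every torch.add call into torch.mul, and recompiles the result back into a runnable module.

```python
import torch
import torch.fx

class AddModule(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

def add_to_mul(m: torch.nn.Module) -> torch.nn.Module:
    gm = torch.fx.symbolic_trace(m)          # capture the program as a Graph
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is torch.add:
            node.target = torch.mul          # rewrite the op in place
    gm.graph.lint()                          # sanity-check the modified Graph
    gm.recompile()                           # regenerate Python code from the Graph
    return gm

transformed = add_to_mul(AddModule())
out = transformed(torch.tensor(2.0), torch.tensor(3.0))  # 2 * 3 instead of 2 + 3
```

Because the result is itself an nn.Module, it can be trained, scripted, or fed into further transforms like any hand-written module.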
Graph Mode Quantization automates the process of quantizing a neural net and does so by leveraging FX's program capture, analysis and transformation facilities. We are also developing many other transformation products with FX and we are excited to share this powerful toolkit with the community. Because FX transforms consume and produce nn.Module instances, they can be used within many existing PyTorch workflows. This includes workflows that, for example, train in Python then deploy via TorchScript. You can read more about FX in the official documentation. You can also find several examples of program transformations implemented using torch.fx here. We are constantly improving FX and invite you to share any feedback you have about the toolkit on the forums or issue tracker. We'd like to acknowledge TorchScript tracing, Apache MXNet hybridize, and more recently JAX as influences for program acquisition via tracing. We'd also like to acknowledge Caffe2, JAX, and TensorFlow as inspiration for the value of simple, directed dataflow graph program representations and transformations over those representations. The PyTorch 1.8 release added a number of new distributed training features as well as improvements to reliability and usability. Concretely: stable-level async error/timeout handling was added to improve NCCL reliability, and there is now stable support for RPC-based profiling. Additionally, we have added support for pipeline parallelism as well as gradient compression through the use of communication hooks in DDP. Details are below: As machine learning models continue to grow in size, traditional Distributed DataParallel (DDP) training no longer scales as these models don't fit on a single GPU device. The new pipeline parallelism feature provides an easy-to-use PyTorch API to leverage pipeline parallelism as part of your training loop.
The DDP communication hook is a generic interface to control how gradients are communicated across workers by overriding the vanilla allreduce in DistributedDataParallel. A few built-in communication hooks are provided, including PowerSGD, and users can easily apply any of these hooks to optimize communication. Additionally, the communication hook interface can also support user-defined communication strategies for more advanced use cases. In addition to the major stable and beta distributed training features in this release, we also have a number of prototype features available in our nightlies to try out and provide feedback on. We have linked the draft docs below for reference. Support for PyTorch Mobile is expanding with a new set of tutorials to help new users launch models on-device quicker and give existing users a tool to get more out of our framework. Our new demo apps include examples of image segmentation, object detection, neural machine translation, question answering, and vision transformers. They are available on both iOS and Android. In addition to performance improvements on CPU for MobileNetV3 and other models, we also revamped our Android GPU backend prototype for broader model coverage and faster inferencing. Lastly, we are launching the PyTorch Mobile Lite Interpreter as a prototype feature in this release. The Lite Interpreter allows users to reduce the runtime binary size. Please try these out and send us your feedback on the PyTorch Forums. All our latest updates can be found on the PyTorch Mobile page. The PyTorch Lite Interpreter is a streamlined version of the PyTorch runtime that can execute PyTorch programs in resource-constrained devices, with a reduced binary size footprint. This prototype feature reduces binary sizes by up to 70% compared to the current on-device runtime in the current release. In 1.8, we are releasing support for benchmark utils to enable users to better monitor performance.
We are also opening up a new automated quantization API. See the details below: Benchmark utils allows users to take accurate performance measurements, and provides composable tools to help with both benchmark formulation and post-processing. This is expected to be helpful for contributors to PyTorch to quickly understand how their contributions are impacting PyTorch performance. For example:

    from torch.utils.benchmark import Timer

    results = []
    for num_threads in [1, 2, 4]:
        timer = Timer(
            stmt="torch.add(x, y, out=out)",
            setup="""
                n = 1024
                x = torch.ones((n, n))
                y = torch.ones((n, 1))
                out = torch.empty((n, n))
            """,
            num_threads=num_threads,
        )
        results.append(timer.blocked_autorange(min_run_time=5))
        print(
            f"{num_threads} thread{'s' if num_threads > 1 else ' ':<4}"
            f"{results[-1].median * 1e6:>4.0f} us " +
            (f"({results[0].median / results[-1].median:.1f}x)"
             if num_threads > 1 else '')
        )

This prints:

    1 thread     376 us
    2 threads    189 us (2.0x)
    4 threads     99 us (3.8x)

FX Graph Mode Quantization is the new automated quantization API in PyTorch. It improves upon Eager Mode Quantization by adding support for functionals and automating the quantization process, although users might need to refactor the model to make it compatible with FX Graph Mode Quantization (i.e., symbolically traceable with torch.fx). In PyTorch 1.8, you can now create new out-of-tree devices that live outside the pytorch/pytorch repo. The tutorial linked below shows how to register your device and keep it in sync with native PyTorch devices. Starting in PyTorch 1.8, we have added support for ROCm wheels, providing easy onboarding for using AMD GPUs. You can simply go to the standard PyTorch installation selector, choose ROCm as an installation option, and execute the provided command.
Thanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues. Team PyTorch
No one who got Moderna's vaccine in trial developed severe Covid-19
This COVID-19 reporting is supported by the Pulitzer Center and the Heising-Simons Foundation. Continuing the spate of stunning news about COVID-19 vaccines, the biotech company Moderna announced the final results of the 30,000-person efficacy trial for its candidate in a press release today: Only 11 people who received two doses of the vaccine developed COVID-19 symptoms after being infected with the pandemic coronavirus, versus 185 symptomatic cases in a placebo group. That is an efficacy of 94.1%, the company says, far above what many vaccine scientists were expecting just a few weeks ago. More impressive still, Moderna's candidate had 100% efficacy against severe disease: there were zero such COVID-19 cases among those vaccinated, but 30 in the placebo group, including one death from the disease. The company today plans to file a request for emergency use authorization (EUA) for its vaccine with the U.S. Food and Drug Administration (FDA), and is also seeking a similar green light from the European Medicines Agency. The data released today bolster an interim report from the company two weeks ago that analyzed only 95 total cases but produced similarly impressive efficacy. "I would still like to see all of the actual data, but what we've seen so far is absolutely remarkable," says Paul Offit, a vaccine researcher at the Children's Hospital of Philadelphia who is a member of an independent committee of vaccine experts that advises FDA. Moderna's vaccine against SARS-CoV-2, the virus that causes COVID-19, relies on a novel technology that uses messenger RNA (mRNA) to code for a protein called spike that studs the surface of the pathogen. Pfizer and BioNTech have developed a similar mRNA vaccine against COVID-19 and also reported excellent results, with an efficacy of 95%, in the final analysis of their 45,000-person trial.
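The 94.1% figure can be reproduced from the case split alone (a back-of-the-envelope sketch; it assumes the vaccine and placebo arms were of roughly equal size, which the trial's 1:1 randomization makes approximately true):

```python
vaccine_cases = 11
placebo_cases = 185

# With equal-sized arms, person-time cancels out and efficacy
# reduces to 1 minus the ratio of case counts.
efficacy = 1 - vaccine_cases / placebo_cases
print(f"{efficacy:.1%}")  # → 94.1%
```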
In that study, which ended after 170 cases of COVID-19 were identified, only 10 severe cases occurred, and just one was in the vaccinated group. Moderna and the Pfizer/BioNTech collaboration say their vaccines worked to about the same degree across age groups, ethnicities, and genders. (More than 7000 participants were over age 65 and more than 5000 were under 65 but had diseases putting them at a higher risk of severe COVID-19; the study also included more than 11,000 people from communities of color.) That equal success is vital information for bodies trying to prioritize the use of the new vaccines, such as an advisory panel to the Centers for Disease Control and Prevention (CDC) that is meeting tomorrow. The committee's recommendations influence CDC's decisions about vaccine prioritization, but individual states come up with their own guidelines. Moderna received $1 billion from the U.S. government's Operation Warp Speed to help develop its mRNA vaccine. (Pfizer passed on such development money, but has signed an advanced purchase order for its vaccine with Warp Speed.) Moderna CEO Stéphane Bancel says all of the federal money went toward staging the clinical trials, and that without it, progress surely would have been delayed. Investors in May contributed another $1.3 billion to help the young company, which has no products on the market, build facilities to produce its vaccine. Pfizer filed an EUA request for its vaccine last week, which led FDA to announce it will convene a meeting of its vaccine advisory committee to discuss the data in depth on 10 December. Bancel says FDA has told the company it might convene the committee again as early as 17 December to review its EUA application. He says the agency could issue an EUA 24 to 72 hours later. Bancel imagines the Moderna vaccine, given its high efficacy against both mild and severe disease, will have the most impact if given to people at the greatest risk from SARS-CoV-2.
"Give it to health care workers, give it to the elderly, give it to people with diabetes, overweight, heart disease," he says. "A 25-year-old healthy man? Give him another vaccine." Moderna plans to charge $32 to $37 per dose of the vaccine in developed countries, Bancel says, but will have cheaper pricing for other parts of the world. The company is negotiating with the COVID-19 Vaccines Global Access Facility, a nonprofit that aims to reduce global vaccine inequities by purchasing and distributing approved products. "We want to have this vaccine available at a tiered price for low-income countries," he says. Bancel stresses that he wants other COVID-19 vaccines to succeed as well. "The world needs several manufacturers to make it to the finish line to stop this awful pandemic," he says. U.K. pharma giant AstraZeneca, in partnership with the University of Oxford, has reported preliminary evidence of efficacy for its COVID-19 vaccine, as has the Gamaleya Research Institute of Epidemiology and Microbiology in Russia. Moderna hopes to provide the U.S. government with 20 million doses by the end of the year, and Pfizer says it should have 50 million doses to split between the United States and other countries that made advanced purchase agreements.
Modeling Bitcoin Value with Scarcity
PlanB, Mar 22, 2019. Satoshi Nakamoto published the bitcoin white paper 31/Oct 2008 [1], created the bitcoin genesis block 03/Jan 2009, and released the bitcoin code 08/Jan 2009. So begins a journey that leads to a $70bn bitcoin (BTC) market today. Bitcoin is the first scarce digital object the world has ever seen. It is scarce like silver & gold, and can be sent over the internet, radio, satellite etc. "As a thought experiment, imagine there was a base metal as scarce as gold but with the following properties: boring grey in colour, not a good conductor of electricity, not particularly strong [..], not useful for any practical or ornamental purpose .. and one special, magical property: can be transported over a communications channel" — Nakamoto [2] Surely this digital scarcity has value. But how much? In this article I quantify scarcity using stock-to-flow, and use stock-to-flow to model bitcoin’s value. Dictionaries usually define scarcity as 'a situation in which something is not easy to find or get', and 'a lack of something'. Nick Szabo has a more useful definition of scarcity: 'unforgeable costliness'. "What do antiques, time, and gold have in common? They are costly, due either to their original cost or the improbability of their history, and it is difficult to spoof this costliness. [..] There are some problems involved with implementing unforgeable costliness on a computer. If such problems can be overcome, we can achieve bit gold." — Szabo [3] "Precious metals and collectibles have an unforgeable scarcity due to the costliness of their creation. This once provided money the value of which was largely independent of any trusted third party. [..][but] you can’t pay online with metal. Thus, it would be very nice if there were a protocol whereby unforgeably costly bits could be created online with minimal dependence on trusted third parties, and then securely stored, transferred, and assayed with similar minimal trust.
Bit gold." — Szabo [4] Bitcoin has unforgeable costliness, because it costs a lot of electricity to produce new bitcoins. Producing bitcoins cannot be easily faked. Note that this is different for fiat money, and also for altcoins that have no supply cap, have no proof-of-work (PoW), have low hashrate, or have a small group of people or companies that can easily influence supply. Saifedean Ammous talks about scarcity in terms of the stock-to-flow (SF) ratio. He explains why gold and bitcoin are different from consumable commodities like copper, zinc, nickel and brass: they have high SF. "For any consumable commodity [..] doubling of output will dwarf any existing stockpiles, bringing the price crashing down and hurting the holders. For gold, a price spike that causes a doubling of annual production will be insignificant, increasing stockpiles by 3% rather than 1.5%." "It is this consistently low rate of supply of gold that is the fundamental reason it has maintained its monetary role throughout human history." "The high stock-to-flow ratio of gold makes it the commodity with the lowest price elasticity of supply." "The existing stockpiles of Bitcoin in 2017 were around 25 times larger than the new coins produced in 2017. This is still less than half of the ratio for gold, but around the year 2022, Bitcoin's stock-to-flow ratio will overtake that of gold" — Ammous[5] So, scarcity can be quantified by SF. SF = stock / flow. Stock is the size of the existing stockpiles or reserves. Flow is the yearly production. Instead of SF, people also use the supply growth rate (flow/stock). Note that SF = 1 / supply growth rate. Let's look at some SF numbers. Gold has the highest SF, at 62: it takes 62 years of production to get the current gold stock. Silver is second with SF 22. This high SF makes them monetary goods. Palladium, platinum and all other commodities have SF barely higher than 1.
Existing stock is usually equal to or lower than yearly production, making production a very important factor. It is almost impossible for commodities to get a higher SF, because as soon as somebody hoards them, price rises, production rises, and price falls again. It is very hard to escape this trap. Bitcoin currently has a stock of 17.5m coins and supply of 0.7m/yr = SF 25. This places bitcoin in the monetary goods category like silver and gold. Bitcoin's market value at current prices is $70bn. Supply of bitcoin is fixed. New bitcoins are created in every new block. Blocks are created every 10 minutes (on average), when a miner finds the hash that satisfies the PoW required for a valid block. The first transaction in each block, called the coinbase, contains the block reward for the miner that found the block. The block reward consists of the fees that people pay for transactions in that block and the newly created coins (called the subsidy). The subsidy started at 50 bitcoins, and is halved every 210,000 blocks (about 4 years). That's why 'halvings' are very important for bitcoin's money supply and SF. Halvings also cause the supply growth rate (in a bitcoin context usually called 'monetary inflation') to be stepped and not smooth. The hypothesis in this study is that scarcity, as measured by SF, directly drives value. A look at the table above confirms that market values tend to be higher when SF is higher. The next step is to collect data and make a statistical model. I calculated bitcoin's monthly SF and value from Dec 2009 to Feb 2019 (111 data points in total). The number of blocks per month can be directly queried from the bitcoin blockchain with Python/RPC/bitcoind. The actual number of blocks differs quite a bit from the theoretical number, because blocks are not produced exactly every 10 minutes (e.g. in the first year, 2009, there were significantly fewer blocks). With the number of blocks per month and the known block subsidy, you can calculate flow and stock.
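The SF arithmetic above is easy to reproduce (a quick sketch using the stock and flow figures quoted in the text):

```python
def stock_to_flow(stock, flow):
    """Years of current production needed to recreate the existing stock."""
    return stock / flow

# Figures from the text: 17.5m BTC stock, 0.7m BTC/yr flow.
btc_sf = stock_to_flow(17.5e6, 0.7e6)  # ≈ 25, in the monetary-goods range
supply_growth = 1 / btc_sf             # ≈ 4% annual supply growth
```

Gold's SF 62 and silver's SF 22 follow from the same ratio applied to their stockpiles and mine output.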
I corrected for lost coins by arbitrarily disregarding the first million coins (7 months) in the SF calculation. More accurate adjusting for lost coins will be a subject for future research. Bitcoin price data is available from different sources but starts at Jul 2010. I added the first known bitcoin prices ($1 for 1,309 BTC in Oct 2009, the first quote of $0.003 on BitcoinMarket in Mar 2010, two pizzas worth $41 for 10,000 BTC in May 2010) and interpolated. Data archeology will be a subject for future research. We already have the data points for gold (SF 62, market value $8.5trn) and silver (SF 22, market value $308bn), which I use as a benchmark. A first scatter plot of SF vs market value shows that it is better to use logarithmic values or axes for market value, because it spans 8 orders of magnitude (from $10,000 to $100bn). Using logarithmic values or axes for SF as well reveals a nice linear relationship between ln(SF) and ln(market value). Note that I use the natural logarithm (ln with base e) and not the common logarithm (log with base 10), which would yield similar results. Fitting a linear regression to the data confirms what can be seen with the naked eye: a statistically significant relationship between SF and market value (95% R2, significance of F 2.3E-17, p-value of slope 2.3E-17). The likelihood that the relationship between SF and market value is caused by chance is close to zero. Of course other factors also impact price (regulation, hacks and other news), which is why R2 is not 100% (and not all dots are on the straight black line). However, the dominant driving factor seems to be scarcity / SF. What is very interesting is that gold and silver, which are totally different markets, are in line with the bitcoin model values for SF. This gives extra confidence in the model. Note that at the peak of the bull market in Dec 2017, bitcoin SF was 22 and bitcoin market value was $230bn, very close to silver.
Because halvings have such a big impact on SF, I put months until the next halving as a color overlay in the chart. Dark blue is the halving month, and red is just after the halving. The next halving is May 2020. The current SF of 25 will double to 50, very close to gold (SF 62). The predicted market value for bitcoin after the May 2020 halving is $1trn, which translates into a bitcoin price of $55,000. That is quite spectacular. I guess time will tell and we will probably know one or two years after the halving, in 2020 or 2021. A great out-of-sample test of this hypothesis and model. People ask me where all the money needed for a $1trn bitcoin market value would come from? My answer: silver, gold, countries with negative interest rates (Europe, Japan, US soon), countries with predatory governments (Venezuela, China, Iran, Turkey etc), billionaires and millionaires hedging against quantitative easing (QE), and institutional investors discovering the best performing asset of the last 10 yrs. We can also model bitcoin price directly with SF. The formula of course has different parameters, but the result is the same: 95% R2 and a predicted bitcoin price of $55,000 with SF 50 after the May 2020 halving. I plotted the bitcoin model price based on SF (black) and the actual bitcoin price over time, with the number of blocks as a color overlay. Note the goodness of fit, especially the almost immediate price adjustment after the Nov 2012 halving. Adjustment after the Jun 2016 halving was much slower, possibly due to Ethereum competition and the DAO hack. Also, you see fewer blocks per month (blue) in the first year, 2009, and during downward difficulty adjustments end-2011, mid-2015 and end-2018. Introduction of GPU miners in 2010-2011 and ASIC miners in 2013 resulted in more blocks per month (red). Also very interesting is that there is indication of a power law relationship. The linear regression function: ln(market value) = 3.3 * ln(SF) + 14.6 ..
can be written as a power law function: market value = exp(14.6) * SF ^ 3.3. The possibility of a power law with 95% R2 over 8 orders of magnitude adds confidence that the main driver of bitcoin value is correctly captured with SF. A power law is a relationship in which a relative change in one quantity gives rise to a proportional relative change in the other quantity, independent of the initial size of those quantities [6]. Every halving, bitcoin SF doubles and market value increases 10x; this is a constant factor. Power laws are interesting because they reveal an underlying regularity in the properties of seemingly random complex systems. Complex systems usually have properties where changes between phenomena at different scales are independent of the scales we are looking at. This self-similar property underlies power law relationships. We see this in Bitcoin too: the 2011, 2014 and 2018 crashes look very similar (all have -80% dips) but on totally different scales (resp. $10, $1000, $10,000); if you don't use log scales, you will not see it. Scale invariance and self-similarity have a link with fractals. In fact, parameter 3.3 in the power law function above is the 'fractal dimension'. For more information on fractals see the famous length-of-coastlines study [7]. Bitcoin is the first scarce digital object the world has ever seen; it is scarce like silver & gold, and can be sent over the internet, radio, satellite etc. Surely this digital scarcity has value. But how much? In this article I quantify scarcity using stock-to-flow, and use stock-to-flow to model bitcoin's value. A statistically significant relationship between stock-to-flow and market value exists. The likelihood that the relationship between stock-to-flow and market value is caused by chance is close to zero. Adding confidence in the model: The model predicts a bitcoin market value of $1trn after the next halving in May 2020, which translates into a bitcoin price of $55,000.
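The regression form and the power law form above are the same model, just exponentiated. A small sketch evaluating it at the post-halving SF of 50 (the parameters 14.6 and 3.3 are the article's rounded fit, and the 18.375m coin supply is our approximation of the post-May-2020 stock, so the results land near, not exactly on, the article's $1trn / $55,000 figures):

```python
import math


def model_market_value(sf, a=14.6, b=3.3):
    # ln(market value) = b * ln(SF) + a  <=>  market value = exp(a) * SF**b
    return math.exp(a) * sf ** b


value = model_market_value(50)  # roughly $0.9trn with the rounded parameters
price = value / 18.375e6        # roughly $48k per coin under our supply assumption
```

The gap versus the quoted $55,000 comes from rounding a and b; the point of the sketch is the shape of the relationship, not the exact forecast.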
[1] https://bitcoin.org/bitcoin.pdf — Satoshi Nakamoto, 2008 [2] https://bitcointalk.org/index.php?topic=583.msg11405#msg11405 — Satoshi Nakamoto, 2010 [3] https://unenumerated.blogspot.com/2005/10/antiques-time-gold-and-bit-gold.html — Nick Szabo, 2008 [4] https://unenumerated.blogspot.com/2005/12/bit-gold.html — Nick Szabo, 2008 [5] The Bitcoin Standard: The Decentralized Alternative to Central Banking — Saifedean Ammous, 2018 [6] https://necsi.edu/power-law [7] http://fractalfoundation.org/OFC/OFC-10-4.html
Scribe – An alternative front-end to Medium
Here's an example. Custom domains work too; see the FAQ for more information.

How-To: To view a Medium post, simply replace medium.com with scribe.rip. If the URL is medium.com/@user/my-post-09a6af907a2, change it to scribe.rip/@user/my-post-09a6af907a2.

How-To Automatically: Check out the FAQ.
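The rewrite rule amounts to a single hostname substitution, leaving the path and slug untouched (a trivial sketch; the helper name is ours):

```python
def to_scribe(url: str) -> str:
    # Swap the Medium hostname for Scribe's; only the first match is replaced.
    return url.replace("medium.com", "scribe.rip", 1)


to_scribe("https://medium.com/@user/my-post-09a6af907a2")
# → "https://scribe.rip/@user/my-post-09a6af907a2"
```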
A Quarter Century of Hype – 25 Years of the Gartner Hype Cycle
Successor Ideology
Successor ideology is a term coined by essayist Wesley Yang to describe an emergent ideology within left-wing political movements in the United States centered around intersectionality, social justice, identity politics, and anti-racism, which is supposedly replacing conventional liberal values of pluralism, freedom of speech, color blindness, and free inquiry. [1] [2] [3] Proponents of the concept link it to an alleged growth in the intolerance of differing opinions, to cancel culture, "wokeness," "social justice warriors", and to the far left; [4] [5] [6] Yang himself describes it bluntly as "authoritarian Utopianism that masquerades as liberal humanism while usurping it from within." [4] The thesis has garnered support from some commentators, with Roger Berkowitz linking it to a broader retreat of liberalism worldwide—challenged from the left in the form of the successor ideology and from the right in the form of illiberal democracy [7]—and with Matt Taibbi calling the ideas of those he associates with the ideology "toxic" and "unattractive", [8] [3] for instance. The concept, however, has also come under criticism, with some commentators arguing that the term does not accurately describe trends within left-wing movements and others considering it a reactionary concept. The term was coined by political writer Wesley Yang in a March 4, 2019 Twitter thread discussing diversity in college admissions and among the professional-managerial class; following a tweet arguing that the end-point of an emergent racial ideology is "critical race theory", Yang stated, "This successor ideology has been a rival to the meritocratic one and has in recent years acquired sufficient power to openly seek hegemony on campuses and elsewhere." [9] He expanded on the term in further tweets in May 2019 [10] and in a 2021 blog post, [11] and has appeared on podcasts by The Wall Street Journal and the Manhattan Institute to promote it.
[12] [13] Sarah Jeong, writing in The Verge , has argued that the term "seems to only muddy the waters since the thing that [critics of the 'successor ideology'] are concerned about isn't actually a concrete ideology but an inchoate social force with the hallmarks of religious revival." [2] Political writer Osita Nwanevu contends that, counter to the narrative that the successor ideology is fundamentally illiberal, it is actually those who are identified with it who are "protecting—indeed expanding—the bounds of liberalism," while it is those who oppose it—whom he calls "reactionaries"—who are "most guilty of the illiberalism they claim has overtaken the American Left." [3]
The Wendelstein 7-X concept proves its efficiency
FULL STORY The optimised Wendelstein 7-X stellarator, which went into operation five years ago, is intended to demonstrate that stellarator-type fusion plants are suitable for power plants. The magnetic field, which encloses the hot plasma and keeps it away from the vessel walls, was planned with great theoretical and computational effort in such a way that the disadvantages of earlier stellarators are avoided. One of the most important goals was to reduce the energy losses of the plasma, which are caused by the ripple of the magnetic field. This is responsible for plasma particles drifting outwards and being lost despite being bound to the magnetic field lines. Unlike in the competing tokamak-type devices, for which this so-called "neo-classical" energy and particle loss is not a major problem, it is a serious weakness in conventional stellarators. It causes the losses to increase so much with rising plasma temperature that a power plant designed on this basis would be very large and thus very expensive. In tokamaks, on the other hand -- thanks to their symmetrical shape -- the losses due to the magnetic field ripple are only small. Here, the energy losses are mainly determined by small vortex movements in the plasma, by turbulence -- which is also added as a loss channel in stellarators. Therefore, in order to catch up with the good confinement properties of the tokamaks, lowering the neoclassical losses is an important task for stellarator optimisation. Accordingly, the magnetic field of Wendelstein 7-X was designed to minimise those losses. In a detailed analysis of the experimental results of Wendelstein 7-X, scientists led by Dr. Craig Beidler from IPP's Stellarator Theory Division have now investigated whether this optimisation leads to the desired effect. With the heating devices available so far, Wendelstein 7-X has already been able to generate high-temperature plasmas and set the stellarator world record for the "fusion product" at high temperature. 
This product of temperature, plasma density and energy confinement time indicates how close you get to the values for a burning plasma. Such a record plasma has now been analysed in detail. At high plasma temperatures and low turbulent losses, the neoclassical losses in the energy balance could be well detected here: they accounted for 30 percent of the heating power, a considerable part of the energy balance. The effect of neoclassical optimisation of Wendelstein 7-X is now shown by a thought experiment: It was assumed that the same plasma values and profiles that led to the record result in Wendelstein 7-X were also achieved in plants with a less optimised magnetic field. Then the neoclassical losses to be expected there were calculated -- with a clear result: they would be greater than the input heating power, which is a physical impossibility. "This shows," says Professor Per Helander, head of the Stellarator Theory Division, "that the plasma profiles observed in Wendelstein 7-X are only conceivable in magnetic fields with low neoclassical losses. Conversely, this proves that optimising the Wendelstein magnetic field successfully lowered the neoclassical losses." However, the plasma discharges have so far only been short. To test the performance of the Wendelstein concept in continuous operation, a water-cooled wall cladding is currently being installed. Equipped in this way, the researchers will gradually work their way up to 30-minute long plasmas. Then it will be possible to check whether Wendelstein 7-X can also fulfil its optimisation goals in continuous operation -- the main advantage of the stellarators. Background The aim of fusion research is to develop a climate- and environmentally-friendly power plant. Similar to the sun, it is to generate energy from the fusion of atomic nuclei. 
Because the fusion fire only ignites at temperatures above 100 million degrees, the fuel -- a low-density hydrogen plasma -- must not come into contact with cold vessel walls. Held by magnetic fields, it floats almost contact-free inside a vacuum chamber. The magnetic cage of Wendelstein 7-X is created by a ring of 50 superconducting magnetic coils. Their special shapes are the result of sophisticated optimisation calculations. With their help, the quality of plasma confinement in a stellarator is to reach the level of competing tokamak-type facilities.
The Dark and Bloody History Behind Bananas
Member-only story by Ryan Fan, published in Age of Awareness, Jan 10, 2020. How can we do better as consumers? (Photo by Brett Jordan on Unsplash.) There are a lot of foods that changed the world, and few of them were as influential as the banana. Buy a banana at an American supermarket and you might see the tag “Chiquita” on your banana. For most of my life, I looked at the tag indifferently and didn’t think much about it. That was until a day or two…
Linktree raises $10.7M for its lightweight, link-centric user profiles
Simple, link-centric user profiles might not sound like a particularly ambitious idea, but it’s been more than big enough for Linktree. The Melbourne startup says that 8 million users — whether they’re celebrities like Selena Gomez and Dua Lipa or brands like HBO and Red Bull — have created profiles on the platform, with those profiles receiving more than 1 billion visitors in September. Plus, there are more than 28,000 new users signing up every day. “This category didn’t exist when we started,” CEO Alex Zaccaria told me. “We created this category.” Zaccaria said that he and his co-founders Anthony Zaccaria and Nick Humphreys created Linktree to solve a problem they were facing at their digital marketing agency Bolster. Instagram doesn’t allow users to include links in posts — all you get is a single link in your profile, prompting the constant “link in bio” reminder when someone wants to promote something. Meanwhile, most of Bolster’s clients come from music and entertainment, where a single link can’t support what Zaccaria said is a “quite fragmented” business model. After all, an artist might want to point fans to their latest streaming album, upcoming concert dates, an online store for merchandise and more. A website could do the job in theory, but they can be clunky or slow on mobile, with users probably giving up before they finally reach the desired page. So instead of constantly swapping out links in Instagram and other social media profiles, a Linktree user includes one evergreen link to their Linktree profile, which they can update as necessary. Selena Gomez, for example, links to her latest songs and videos, but also her Rare Beauty cosmetics brand, her official store and articles about her nonprofit work. Zaccaria said that after launching the product in 2016, the team quickly discovered that “a lot more people had the same problem,” leading them to fully separate Linktree and Bolster two years ago.
Since then, the company hasn’t raised any outside funding — until now, with a $10.7 million Series A led by Insight Partners and AirTree Ventures. (Update: Strategic investors in the round include Twenty Minute VC’s Harry Stebbings, Patreon CTO Sam Yam and Culture Amp CTO Doug English.) “We had the option to just continue to grow sustainably, but we wanted to pour some fuel on the fire,” Zaccaria said. In fact, Linktree has already grown from 10 to 50 employees this year. And while the company started out by solving a problem for Instagram users, Zaccaria described it as evolving into a much broader platform that can “unify your entire digital ecosystem” and “democratize digital presence.” He said that while some customers continue to maintain “a giant, brand-immersive website,” for others, Linktree is completely replacing the idea of a standalone website. Zaccaria added that Instagram only represents a small amount of Linktree’s current traffic, while nearly 25% of that traffic now comes from direct visitors. Black Lives Matter has also been a big part of Linktree’s recent growth, with activists and other users who want to support the movement using their profiles to point visitors to websites where they can donate, learn more and get involved. In fact, Linktree even introduced a Black Lives Matter banner over the summer that anyone could add to their profile. Linktree is free to use, but you have to pay $6 a month for Pro features like video links, link thumbnails and social media icons. Zaccaria said that the new funding will allow the startup to add more “functionality and analytics.” He’s particularly eager to grow the data science and analytics team, though he emphasized that Linktree does not collect personally identifiable information or monetize visitor data in any way — he just wants to provide more data to Linktree users. 
In a statement, Insight Managing Director Jeff Lieberman said: As the internet becomes increasingly fragmented, brands, publishers, and influencers need a solution to streamline their content sharing and connect their social media followers to their entire online ecosystem, ultimately increasing brand awareness and revenue. Linktree has successfully created this new “microsite” category enabling companies to monetize the next generation of the internet economy via a single interactive hub. The impressive traction and growing number of customers Linktree has gained over the last few months demonstrates its proven market fit, and we could not be more excited to work with the Linktree team as they transition to the ScaleUp phase of growth. Tap Bio’s mini-sites solve Instagram’s profile link problem
9
Relearn CSS Layout
Learn to write better, resilient CSS If you find yourself wrestling with CSS layout, it’s likely you’re making decisions for browsers they should be making themselves. Through a series of simple, composable layouts, Every Layout will teach you how to better harness the built-in algorithms that power browsers and CSS. Buy Every Layout For $69 Read the free rudiments and axioms Already purchased Every Layout, but lost your access? No worries. Add the email address that you used to purchase Every Layout and we’ll re-send your access link. Every Layout has helped thousands of developers and companies simplify CSS layout in their projects. Employing algorithmic layout design means doing away with @media breakpoints, “magic numbers”, and other hacks, to create context-independent layout components. Your future design systems will be more consistent, terser in code, and more malleable in the hands of your users and their devices. Every Layout is now in its 3rd edition and has helped some of the largest companies in the world achieve exactly this. Let these happy folks tell you how Every Layout helped them Kevin Powell I can’t recommend Every Layout enough. Fantastic for all of the layouts you can use in your projects obviously, but also for how much you’ll learn about flexbox, and CSS in general. Amie Chen CSS is one of the few things I’m comfortable with, but I’m still learning A TON reading Every Layout. Such a great resource! Josh Tumath Even two years later, Every Layout is still the best resource for learning common, intrinsic layout patterns. It’s revolutionized our design system at the BBC. I’m always sharing it with colleagues who want more experience with CSS. Mariana Cortés Rueda This is dev love in form of a resource guide and you’d do well in reviewing it and sharing it. Yes. AGAIN. Thank you Heydon and Andy for this piece of niche art. 
Jess Peck Every Layout is a fantastic resource, a great reference, and also has really helped me understand the structure and styling decisions that go into building websites. Chris Weekly Every Layout has fantastic free content, but the full price for all the materials (book, site, components) has had absurdly high ROI for me. I spent less than an hour’s consulting wages, and it’s been transformative - a gift that keeps giving. Highest possible recommendation. P J Łaszkowicz Started web development in 1997 and approached the same problems with CSS with updated solutions over the years. Even after all this time, references like Every Layout from Andy and Heydon are invaluable materials for re-reading and improving. Best practices do work Some influential people have framed CSS as a flawed technology. They’ve encouraged CSS authors to brute force layout in ways that don’t make the most of CSS’s features. With our introductory chapters, the “rudiments”, we catch you up on just how smart and elegant modern CSS can be. What you learn in these chapters is then applied to 12 specially designed, modular layout solutions, documented with customizable code generators and implemented as handy custom elements. We teach you best practices that are guaranteed to make you a better, well-rounded CSS programmer, whether you are a full-stack developer, a designer, a back-end developer or even a ${yourJobTitleHere}. Read the free rudiments and axioms The Stack (read for free) The Box The Center The Cluster The Sidebar (read for free) The Switcher The Cover (read for free) The Grid The Frame The Reel The Imposter The Icon Meet the authors who brought you Every Layout Heydon Pickering Heydon is a front-end developer and technical writer specializing in inclusive interface design. They have written and edited a number of books on designing for the web. Every Layout is their second book to be republished in Japan. 
Heydon has consulted organizations like Spotify, The BBC, and SpringerNature, helping them to code and document accessible design systems. They also own an online gallery that lets you create and print unique, generative artworks. Andy Bell Andy is a designer and front-end developer who founded Set Studio: an agency that specialises in producing stunning websites that work for everyone. Andy has also spent well over a decade specializing in simplifying CSS to make it scale up for design system projects—in some cases, accessible to millions of people—for some of the largest organizations in the world, such as Google, Harley-Davidson, Vice Media and the NHS. He also wrote the majority of Learn CSS, a CSS course by web.dev.
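The composable, breakpoint-free approach the book describes can be sketched in a few lines of CSS. The class names and values below are illustrative only, not the book's exact code (Every Layout's real implementations add configuration options and custom elements):

```css
/* A "stack": space siblings vertically with one rule, instead of
   hard-coding a margin on every element. */
.stack > * + * {
  margin-block-start: 1.5rem;
}

/* A sidebar pair that wraps without any @media query: the sidebar
   keeps an ideal width, the content grows, and flex wrapping decides
   the breakpoint from the available space, not the viewport. */
.with-sidebar {
  display: flex;
  flex-wrap: wrap;
  gap: 1rem;
}
.with-sidebar > :first-child {
  flex-basis: 20rem; /* the sidebar's ideal width */
  flex-grow: 1;
}
.with-sidebar > :last-child {
  flex-basis: 0;
  flex-grow: 999;       /* take all remaining space */
  min-inline-size: 50%; /* wrap once the content would get narrower than this */
}
```

Because neither rule mentions a viewport width, the same components behave sensibly inside a page, a card, or a modal.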
7
Portugal Makes It Illegal for Your Boss to Text You After Work
Photo: Getty stock image The Portuguese parliament has passed new labour laws to give workers a healthier work-life balance and to attract “digital nomads” to the country. Employers could face penalties for contacting employees outside work hours, according to the new laws. The legislation, approved on Friday, comes following the expansion of home working after the coronavirus pandemic, according to Portugal’s Socialist Party government. Under new rules, employers could be penalised for contacting employees after work and will be forced to pay for increased expenses as a result of working from home – such as gas and electricity bills. Further rules will be implemented to aid employees at home, such as banning employers from monitoring their workers at home and ensuring workers must meet with their boss every two months to stop isolation. Not all legislation designed to help home workers passed through parliament, however. The so-called “Right to Disconnect” – a law giving workers the ability to switch off work devices – was not voted through. "The pandemic has accelerated the need to regulate what needs to be regulated," Portugal's Minister of Labour and Social Security, Ana Mendes Godinho said during the Web Summit tech conference in Lisbon last week. "Telework can be a 'game changer' if we profit from the advantages and reduce the disadvantages. "We consider Portugal one of the best places in the world for these digital nomads and remote workers to choose to live in, we want to attract them to Portugal," she said.
2
La Violencia
La Violencia (Spanish pronunciation: [la βjoˈlensja], "The Violence") was a ten-year civil war in Colombia from 1948 to 1958, between the Colombian Conservative Party and the Colombian Liberal Party, fought mainly in the countryside. [1] [2] [3] La Violencia is considered to have begun with the assassination on 9 April 1948 of Jorge Eliécer Gaitán, a Liberal Party presidential candidate and frontrunner for the 1949 November election. [4] His murder provoked the Bogotazo rioting, which lasted ten hours and resulted in around 5,000 casualties. [4] An alternative historiography proposes the Conservative Party's return to power following the election of 1946 to be the cause. [4] Rural town police and political leaders encouraged Conservative-supporting peasants to seize the agricultural lands of Liberal-supporting peasants, which provoked peasant-to-peasant violence throughout Colombia. [4] La Violencia is estimated to have cost the lives of at least 200,000 people, almost 2% of the population of the country at the time. [5] [6] [7] In September 1949, Senator Gustavo Jiménez was assassinated mid-session, in Congress. [8] The La Violencia conflict took place between the Military Forces of Colombia and the National Police of Colombia, supported by Colombian Conservative Party paramilitary groups, on one side, and paramilitary and guerrilla groups aligned with the Colombian Liberal Party and the Colombian Communist Party on the other side. The conflict caused millions of people to abandon their homes and property. Media and news services failed to cover events accurately for fear of revenge attacks. The lack of public order and civil authority prevented victims from laying charges against perpetrators. Documented evidence from these years is rare and fragmented. The majority of the population at the time was Catholic. During the conflict there were press reports that Catholic Church authorities supported the Conservative Party. 
Several priests were accused of openly encouraging the murder of the political opposition during Catholic mass, including the Santa Rosa de Osos Bishop Miguel Ángel Builes, although this is unproven. No formal charges were ever presented and no official statements were made by the Holy See or the Board of Bishops. These events were recounted in the 1950 book Lo que el cielo no perdona ("What heaven doesn't forgive"), written by the secretary to Builes, Father Fidel Blandon Berrio. [9] [10] Eduardo Caballero Calderón also recounted these events in his 1952 book El Cristo de Espaldas ("Backwards Christ"). After releasing his book, Blandon resigned from his position and assumed a false identity as Antonio Gutiérrez. However, he was eventually identified and legally charged and prosecuted for libel by the Conservative Party. [10] As a result of La Violencia there were no Liberal candidates for the presidency, congress, or any public corporations in the 1950 elections. The press accused the government of pogroms against the opposition. Censorship and reprisals were common against journalists, writers, and directors of news services; in consequence many media figures left the country. Jorge Zalamea, director of Critica magazine, fled to Buenos Aires; Luis Vidales to Chile; Antonio Garcia to La Paz; and Gerardo Molina to Paris. Painting of Eliseo Velásquez leading guerrilla forces. Fernando Botero, "Guerrilla de Eliseo Velásquez" (1988). From the 1920s, Conservatives held the majority of governmental power, and the two traditional parties dominated Colombia for 150 years, until the 2002 election of Álvaro Uribe broke the duopoly. Even when Liberals gained control of the government in the 1930s, there were tensions and even violent outbursts between the peasants and landowners, and between workers and industry owners. [11] The number of yearly deaths, however, was far less than the estimates of those killed during La Violencia. 
[11] In the 1946 election, Mariano Ospina Pérez of the Conservative party won the presidency, largely because the Liberal votes were split between two Liberal candidates. [12] Ospina Pérez and the Conservative government used the police and army to repress the Liberal party. The Liberals' response was to fight back with violent protests. This led to an increasing amount of pressure within political and civil society. [13] Some consider La Violencia to have started at this point, because the Conservative government began increasing the backlash against Liberal protests and small rebel groups. [14] There were an estimated 14,000 deaths in 1947 due to this violence. [11] On April 9, 1948, Liberal Party leader Jorge Eliécer Gaitán was assassinated by Juan Roa Sierra on the street in Bogotá, via three shots from a revolver. [15] Gaitán was a popular candidate and would have been the likely winner of the 1950 election. [11] [15] This began the Bogotazo, as angry mobs beat Roa Sierra to death and headed to the presidential palace with the intent of killing President Ospina Pérez. [15] The murder of Gaitán and subsequent rioting sparked other popular uprisings throughout the country. [11] Because of the Liberal nature of these revolts, the police and military, who had been largely neutral before, either defected or became aligned with the Conservative government. [11] [15] Initially, Liberal leaders in Colombia worked with the Conservative government to stop uprisings and root out Communists. [11] [15] In May 1949, Liberal leaders resigned from their positions within the Ospina Pérez administration, due to the widespread persecution of Liberals throughout the country. [15] Attempting to end La Violencia, the Liberals, who had majority control of Congress, began impeachment proceedings against President Ospina Pérez on November 9, 1949. [15] In response, Ospina Pérez dissolved the Congress, creating a Conservative dictatorship. 
The Liberal Party decided to stage a military coup, planned for November 25, 1949. [15] However, many of the party members decided it was not a good idea and called it off. One conspirator, Air Force Captain Alfredo Silva, in the city of Villavicencio, had not been notified of the abandonment of the plan and carried it out. After rallying the Villavicencio garrison, he disarmed the police and took control of the city. [15] Silva proceeded to urge others in the region to join the revolt, and Eliseo Velásquez, a peasant guerrilla leader, took Puerto López on December 1, 1949, as well as capturing other villages in the Meta River region. [15] During this time, Silva was caught and arrested by troops from Bogotá coming to take back control of Villavicencio. [15] In 1950, Laureano Gómez was elected president of Colombia, but it was a largely manipulated election, leading Gómez to become the new Conservative dictator. [16] After Alfredo Silva's disappearance, Velásquez assumed power of the forces in the Eastern Plains that, by April 1950, included seven rebel zones with hundreds of guerrillas known as the "cowboys". [15] While in command of the forces, Velásquez suffered from a superiority complex, leading him to commit abuses including body mutilation of those killed. [15] Without sufficient arms, during the first major offensive of the Conservative army, the Liberal forces took major losses and confidence in Velásquez was lost. [15] New populist leaders took control of the different groups of rebels and eventually came together to impose a 10% tax on wealthy landowners in the region. [15] This tax created divisions with the wealthy Liberals, which the Conservative government used to recruit counter-guerrillas. The Conservative army then increased its offensive attacks; committing atrocities along the way, they burned entire villages, slaughtered animals, and massacred suspected rebels, as well as setting up a blockade of the region. 
[15] The rebels were able to combat the offensive with small, covert attacks to capture outposts and supplies. By June 1951, the government agreed to a truce with the guerrilla forces and temporarily lifted the blockade. [15] A few months after the truce, larger army units were sent to the Eastern Plains to end the Liberal revolt, but they were still unsuccessful. [15] In this time, the Liberal leadership in Bogotá realized the Conservatives were not giving up power any time soon, and they wanted to organize a national revolt. In December 1951 and January 1952, Alfonso López Pumarejo, the former Colombian president and leader of the Liberal Party, made visits to the Eastern Plains to renew his alliance with the "cowboys". [15] When López Pumarejo returned to Bogotá he issued declarations stating that the guerrillas were not criminals but were simply fighting for freedom; in response the Conservative dictatorship shut down the newspapers and imposed strict censorship. [15] 1952 passed with only small skirmishes and no organized guerrilla leader, but by June 1953, Guadalupe Salcedo had assumed command. [15] In other parts of Colombia, different rebel groups had formed throughout 1950, in Antioquia, Tolima, Sumapaz, and the Middle Magdalena Valley. [15] On January 1, 1953, these groups came together to launch an attack against the Palanquero Air Base, with the hope of using the jet planes to bomb Bogotá and force the resignation of the Conservative dictatorship. [15] The attack relied entirely on surprise to be successful, but the rebels were spotted by the sentry posts and were quickly hit with machine gun fire. [15] The attempt was a failure; however, it did instill fear in Bogotá's elites. Most of the armed groups (called guerrillas liberales, a pejorative term) were demobilized during the amnesty declared by General Gustavo Rojas Pinilla after he took power on 13 June 1953. 
The most prominent guerrilla leaders, Guadalupe Salcedo and Juan de la Cruz Varela, signed the 1953 agreement. Some of the guerrilleros did not surrender to the government and organized into criminal bands, or bandoleros, which prompted intense military operations against them in 1954. One of them, the guerrillero leader Tirofijo, had changed his political and ideological inclinations from being a Liberal to supporting the Communists during this period, and eventually he became the founder of the communist Revolutionary Armed Forces of Colombia, or FARC. Rojas was removed from power on 10 May 1957. Civilian rule was restored after moderate Conservatives and Liberals, with the support of dissident sectors of the military, agreed to unite under a bipartisan coalition known as the National Front, led by the government of Alberto Lleras Camargo, which included a system of alternating the presidency and power-sharing in both cabinets and public offices. In 1958, Lleras Camargo ordered the creation of the Commission for the Investigation of the Causes of "La Violencia". The commission was headed by the Bishop Germán Guzmán Campos. The last bandolero leaders were killed in combat against the army. Jacinto Cruz Usma, alias Sangrenegra (Blackblood), died in April 1964, and Efraín Gonzáles in June 1965. Due to incomplete or non-existent statistical records, exact measurement of La Violencia's humanitarian consequences is impossible. Scholars, however, estimate that between 200,000 and 300,000 people died; 600,000 to 800,000 were injured; and almost one million people were displaced. La Violencia directly or indirectly affected 20 percent of the population. [17] La Violencia did not acquire its name simply because of the number of people it affected; it was the manner in which most of the killings, maimings, and dismemberments were done. 
Certain death and torture techniques became so commonplace that they were given names; for example, picar para tamal, which involved slowly cutting up a living person's body, or bocachiquiar, where hundreds of small punctures were made until the victim slowly bled to death. Former Senior Director of International Economic Affairs for the United States National Security Council and current President of the Institute for Global Economic Growth, Norman A. Bailey describes the atrocities succinctly: "Ingenious forms of quartering and beheading were invented and given such names as the 'corte de mica', 'corte de corbata' (aka Colombian necktie), and so on. Crucifixions and hangings were commonplace, political 'prisoners' were thrown from airplanes in flight, infants were bayoneted, schoolgirls, some as young as eight years old, were raped en masse, unborn infants were removed by crude Caesarian section and replaced by roosters, ears were cut off, scalps removed, and so on." [17] While scholars, historians, and analysts have all debated the source of this era of unrest, they have yet to formulate a widely accepted explanation for why it escalated to the notable level it did. As a result of La Violencia, landowners were allowed to create private armies for their security, which was formally legalized in 1965. Holding private armies was made illegal in 1989, only to be made legal once more in 1994. [18] Historical interpretations The death of the bandoleros and the end of the mobs was not the end of all the violence in Colombia. One communist guerrilla movement, the Peasant Student Workers Movement, started its operations in 1959. [19] Later, other organizations such as the FARC and the National Liberation Army emerged, marking the beginning of a guerrilla insurgency. 
Credence in conspiracy theories as causes of violence As was common of 20th-century eliminationist political violence, the rationales for action immediately before La Violencia were founded on conspiracy theories, each of which blamed the other side as traitors beholden to international cabals. The left were painted as participants in a global Judeo-Masonic conspiracy against Christianity, and the right were painted as agents of a Nazi-Falangist plot against democracy and progress. Anticlerical conspiracy theory After the death of Gaitán, a conspiracy theory which was circulated by the left, that leading conservatives, militant priests, Nazis and Falangists were involved in a plot to take control of the country and undo the country's moves toward progress, spurred the violence. [20] This conspiracy theory supplied the rationale for Liberal Party radicals to engage in violence, notably the anti-clerical attacks and killings, particularly in the early years of La Violencia. Some propaganda leaflets circulating in Medellín blamed a favorite of anti-Catholic conspiracy theorists, the Society of Jesus (Jesuits), for the murder of Gaitán. [21] Across the country, militants attacked churches, convents, and monasteries, killing priests and looking for arms, because they believed that the clergy had guns, a rumor which was proven to be false when no serviceable weapons were found during the raids. [20] One priest, Pedro María Ramírez Ramos, was slaughtered with machetes and hauled through the street behind a truck, despite the fact that the militants had previously searched the church grounds and found no weapons. [21] Despite the circulation of the conspiracy theories and the propaganda after Gaitán was killed, most of the leftists who were involved in the rioting on 9 April learned from their errors, and as a result, they stopped believing that priests had harbored weapons. 
[22] The belief in the existence of some sort of conspiracy, a belief which was adhered to by members of both camps, made the political environment toxic, increasing the animosity and the suspicion which existed between both parties. [23] Judeo-Masonic conspiracy theory The Conservatives were also motivated by their belief in the existence of a supposed international Judeo-Masonic conspiracy. In their view, they would prevent the Judeo-Masonic conspiracy from coming to fruition by eliminating the Liberals who were in their midst. [24] In the two decades prior to La Violencia, Conservative politicians and churchmen adopted from Europe the Judeo-Masonic conspiracy theory to portray the Liberal Party as involved in an international anti-Christian plot, with many prominent Liberal politicians actually being Freemasons. [25] Although most of the rhetoric of conspiracy was introduced and circulated by some of the clergy, as well as by Conservative politicians, by 1942, many clerics became critical of the Judeo-Masonic conspiracy theory. Jesuits outside Colombia had already questioned and published refutations of the authenticity of The Protocols of the Elders of Zion, disproving the concept of a global Judeo-Masonic conspiracy. Regarding this same matter, Colombian clergy also came under the increasing influence of U.S. clergy; and Pius XI asked U.S. Jesuit John LaFarge, Jr. to draft an encyclical against anti-Semitism and racism. [26] The belief in the existence of a Judeo-Masonic conspiracy played a prominent role in the politics of Laureano Gómez, who led the Colombian Conservative Party from 1932 to 1953. [27] More provincial politicians followed suit, and the fact that prominent national and local politicians voiced this conspiracy theory, rather than just a portion of the clergy, gave the idea greater credibility while it gathered momentum among the party's members. 
The atrocities that were committed at the outset of the Spanish Civil War in 1936 were seen by both sides as a possible precedent for Colombia, causing both sides to fear that it could also happen in their country; this belief also spurred the credibility of the conspiracies and it also served as a rationale for violence. [23] Conservatives pointed to the anticlerical violence in the Republican zones in Spain in the first months of that war (when anarchists, left-wing socialists and independent communists burned churches and murdered nearly 7,000 priests, monks, and nuns) and used this to justify their own mass killings of Jews, Masons, and socialists. [23] References: Encyclopædia Britannica, 15th edition, 1992 printing; Burnyeat, G. (2018), Chocolate, Politics and Peace-Building, Springer; Williford 2005, pp. 142, 178, 185, 197, 217–218, 277–278.
1
Calvin and Hobbes Inspired a Generation (2013)
Since its concluding panel in 1995, Calvin and Hobbes has remained one of the most influential and well-loved comic strips of our time. Calvin and Hobbes follows a six-year-old boy, Calvin, and his stuffed tiger, Hobbes, as they explore the world around them. Bill Watterson, the creator of the comic, drew 3,160 strips over ten years and notably refused to license his characters for commercial purposes. A forthcoming feature-length documentary, Dear Mr. Watterson , examines the far-reaching appeal and impact of Calvin and Hobbes, 18 years after Watterson stopped inking new panels. In an excerpt of the film above, comic strip artists speak to how they were personally influenced by Calvin and Hobbes. The full film is a fascinating exploration into the artistic and philosophical harmony of the strip, narrated by filmmaker Joel Allen Schroeder. In an interview with The Atlantic's Video channel, Schroeder talks about the comic strip and his experience making Dear Mr. Watterson: The Atlantic: What do you think sets Calvin and Hobbes apart from other comic strips? Joel Allen Schroeder: Watterson is clearly an amazing artist. You look at his Sunday strips in particular and the imagery just sucks you in. And then you add in his wonderful writing, the depth of his characters, Calvin's limitless world and the humanity that is present in Calvin and Hobbes and I think those are some of the ingredients that make the magic. And then when you consider Watterson's high standards for his work and his respect for the comic strip medium, I think that is what takes Calvin and Hobbes to another level. Since Calvin and Hobbes ceased publication in 1995, how do you think the perception and impact of the strip has changed? Because of Watterson's decision to avoid endless merchandising of his characters, we can only get our Calvin and Hobbes fix by digging out our books. 
We're not saturated with Calvin and Hobbes in commercials or on toy shelves or at theme parks and that helps to keep the strip intact. It isn't watered down, longtime fans still crave it, and new readers will always be introduced to the characters in their original, intended medium. Watterson has said it before, and stated the same idea again more recently: "Calvin and Hobbes was designed to be a comic strip and that's all I want it to be. It's the one place where everything works the way I intend it to." Watterson wrote the strip in such a way that it is timeless, and as long as people continue to read it and pass it along, I think it only stands out more as a work of excellence. Why did you choose to title the film Dear Mr. Watterson? The inspiration for the film came out of the idea of writing a letter to Bill Watterson, but that letter never happened. I'd never know what to say, and so, in a way, that unwritten letter sort of transmogrified into this film. The name has always felt appropriate, even though I hope that viewers see the film as more than strictly a love letter of sorts. Did you attempt to reach out to Bill Watterson for the film, and has he given any thoughts about the final product? I knew of Watterson's private nature before the project began, and I never wanted to make the film a search for him. The mythology surrounding Bill Watterson is often focused on his status as a "recluse" (a word I really don't like), but I really think the fascinating story has to do with how he put ink to paper for a decade and created something that really means something special to so many people from around the world—and continues to hold that significance. The minute we start tracking him down, that more important story becomes secondary. Bill Watterson has, indeed, seen the final film, and we know that he appreciated our choices to make it less intrusive. What do you hope your film will add to the conversation surrounding the comic strip? 
First of all, I hope anybody who sees, or even just hears about the film, will be inspired to search out their Calvin and Hobbes collections and find a cozy place to sit down and go exploring for a little while. Beyond that, I hope it promotes conversation about art and the impact of art. I think wonderful and amazing things that have a positive impact in our world should be talked about and celebrated. I'm pretty sure if that continues to happen, other people will be inspired to make more wonderful and amazing things that have a positive impact. And if this film prompts viewers to introduce Calvin and Hobbes to a friend or a family member, or even a stranger ... that's perfect. Dear Mr. Watterson begins its theatrical run and will be available for download on November 15. To learn more about the film and check theater release dates, please visit DearMrWatterson.com.
7
FDA approves Gilead's remdesivir as coronavirus treatment
The FDA approved Gilead Sciences' antiviral drug remdesivir as a treatment for the coronavirus. The intravenous drug has helped shorten the recovery time of some hospitalized Covid-19 patients. Remdesivir is now the first and only fully approved treatment in the U.S. for Covid-19. The Food and Drug Administration on Thursday approved Gilead Sciences' antiviral drug remdesivir as a treatment for the coronavirus. In May, the FDA granted the drug an emergency use authorization, allowing hospitals and doctors to use it on patients hospitalized with the disease even though the medication had not been formally approved by the agency. The intravenous drug has helped shorten the recovery time of some hospitalized Covid-19 patients. It was one of the drugs used to treat President Donald Trump, who tested positive for the virus earlier this month. The drug will be used for Covid-19 patients at least 12 years old and requiring hospitalization, Gilead said. Remdesivir is now the first and only fully approved treatment in the U.S. for Covid-19, which has infected more than 41.3 million people worldwide and killed more than 1 million, according to data compiled by Johns Hopkins University. Shares of Gilead were up more than 5% in after-hours trading. "Since the beginning of the COVID-19 pandemic, Gilead has worked relentlessly to help find solutions to this global health crisis," Gilead CEO Daniel O'Day said in a statement. "It is incredible to be in the position today, less than one year since the earliest case reports of the disease now known as COVID-19, of having an FDA-approved treatment in the U.S. that is available for all appropriate patients in need." Remdesivir is approved or authorized for temporary use as a Covid-19 treatment in approximately 50 countries worldwide, according to the company. The drug is administered in a hospital setting via an IV. 
The company said the medication should only be administered in a hospital or in a health-care setting capable of providing acute care comparable with inpatient hospital care. Earlier this month, a study coordinated by the World Health Organization had indicated that the drug had "little or no effect" on death rates among hospitalized patients. Still, it has shown to be modestly effective in reducing the recovery time for some hospitalized patients. Earlier in the year, Dr. Anthony Fauci, the nation's leading infectious disease expert, said the drug would set "a new standard of care" for Covid-19 patients. The majority of patients treated with remdesivir receive a five-day course using six vials of the drug. The company is also developing an inhaled version of the medication, which it will administer through a nebulizer, a delivery device that can turn liquid medicines into mist. The company has said the drug can't be administered in pill form because its chemical makeup would impact the liver. Remdesivir, now under the brand name Veklury, costs $2,340 for a five-day treatment course for people covered by government health programs and other countries' health-care systems, and $3,120 for U.S. patients with private health coverage. In August, the company said it planned to produce more than 2 million treatment courses of remdesivir by the end of the year and anticipated being able to make "several million more" in 2021, adding it has increased the supply of the drug more than fiftyfold since January. Its manufacturing network now includes more than 40 companies in North America, Europe and Asia. The company said Thursday it is meeting real-time demand for the drug in the United States and anticipates meeting global demand this month, even in the event of potential future surges of Covid-19 cases.
Open Mobile Maps: Lightweight and Modern Map SDK for Android / iOS
openmobilemaps/maps-core
Fiscal Inflation
This is an essay, prepared for the CATO 39th annual monetary policy conference.  It will appear in a CATO book edited by Jim Dorn. This is a longer and more academic piece underlying "The ghost of Christmas inflation." Video of the conference presentation. This essay in pdf form. From its inflection point in February 2021 to November 2021, the CPI rose 6 percent (278.88/263.161), an 8 percent annualized rate.  Why? Starting in March 2020, in response to the disruptions of Covid-19, the U.S. government created about $3 trillion of new bank reserves, equivalent to cash, and sent checks to people and businesses. (Mechanically, the Treasury issued $3 trillion of new debt, which the Fed quickly bought in return for $3 trillion of new reserves. The Treasury sent out checks, transferring the reserves to people’s banks. See Table 1.)  The Treasury then borrowed another $2 trillion or so, and sent more checks. Overall federal debt rose nearly 30 percent. Is it at all a surprise that a year later inflation breaks out?  It is hard to ask for a clearer demonstration of fiscal inflation, an immense fiscal helicopter drop, exhibit A for the fiscal theory of the price level (Cochrane 2022a, 2022b). What Dropped from the Helicopter? From December 2019 to September 2021, the M2 money stock also increased by $5.6 trillion.  This looks like a monetary, not a fiscal intervention, Milton Friedman’s (1969) classic tale that if you want inflation, drop money from helicopters. But is it monetary or fiscal policy? Ask yourself: Suppose the expansion of M2 had been entirely financed by purchasing Treasury securities. Imagine Treasury debt had declined $5 trillion while M2 and reserves rose $5 trillion. Imagine that there had been no deficit at all, or even a surplus during this period. The monetary theory of inflation, MV=PY, states that we would see the same inflation. Really? Similarly, ask yourself: Suppose that the Federal Reserve had refused to go along. 
Suppose that the Treasury had sent people Treasury bills directly, accounts at Treasury.gov, along with directions on how to sell them if people wished to do so. Better, suppose that the Treasury had created new mutual funds that hold Treasury securities, and sent people mutual fund shares. (I write "mutual fund" because money market funds are counted in M2.) The monetary theory of inflation says again that this would have had no effect. These would be debt issues, causing no inflation, not a monetary expansion. Really? Clearly, overall debt matters, not the split of government debt between interest-paying reserves or monetary base and Treasury securities. The Federal Reserve itself is nothing more than an immense money-market fund, offering shares that are pegged at $1 each, pay interest, and are backed by a portfolio of Treasury and mortgage-backed securities. (Plus, an army of regulators and a huge staff of economists who are supposed to help forecast inflation.) Milton Friedman's (1969) helicopter drop is a powerful parable. But a helicopter drop is a fiscal policy, not a monetary policy. The U.S. Federal Reserve may not legally drop money from helicopters; it may not write checks to voters. Even less may the Fed vacuum up money; it may not tax people. Helicopter drops and money vacuums are fiscal operations. The Fed may only lend money, or buy and sell assets. To accomplish a helicopter drop in the United States, the Treasury must issue debt, the Fed must buy it with newly printed money, and then the Treasury must drop that money from helicopters, writing it down as a transfer payment. And that is pretty much exactly what happened. Ask yourself: If, as Friedman's helicopter is dropping $1,000 on each household, the Fed sends burglars who remove $1,000 of Treasury securities from the same households, would we still see inflation? That's monetary policy. 
If Friedman's helicopter were followed by the Treasury secretary with a bullhorn, shouting "Enjoy your $1,000 in helicopter money. Taxes are going up $1,000 tomorrow," would we still see inflation? Friedman's helicopters are not a monetary change, a substitution of money for debt, an increase in the liquidity of a given set of household assets. They are a "wealth effect" of government debt. Dropping debt from helicopters is a brilliant psychological device for convincing people that the government debt raining down on them will not be repaid by future taxes or spending restraint. It will be left outstanding, so they had better spend it now. Indeed, we just witnessed a "helicopter drop." But a helicopter drop is fiscal policy. Why did fiscal inflation not happen sooner? The government has been borrowing money like the proverbial drunken sailor, for decades. The Fed has been buying Treasury securities and turning the debt into reserves for a decade. Why now? Inflation comes when government debt increases, relative to people's expectations of what the government will repay. If the Treasury borrows, but everyone understands it will later raise tax revenues or cut spending to repay the debt, that debt does not cause inflation. It is a good investment, and people are happy to hold on to it. If the Fed prints up a lot of money, buys Treasury debt, and the Treasury hands out the money, as happened, but everyone understands the Treasury will pay back the debt with future surpluses, the extra money causes no inflation. The Fed can always soak up the money by selling its Treasury securities, and the Treasury repays those securities with surpluses (i.e., taxes less spending). The 2020–2021 borrowing and money episode was distinctive because, evidently, it came without a corresponding increase in expectations that the government would, someday, raise surpluses by $5 trillion in present value to repay the debt. Looking into people's heads is hard, but why? 
We can at least find some plausible speculations. One may look to politicians' statements. Even in the Obama-era "stimulus" spending, the administration emphasized promises of eventual debt reduction. One may chuckle and sneer at promises to repay debts decades after an administration leaves office, but at least they went through the motions to make that promise! Nobody went through any motions about long-run fiscal planning, long-run deficit reduction, and entitlement and tax reform, in 2020–2021. It was the era of modern monetary theory (MMT), of costless "fiscal expansion" made possible, or so it was claimed, by the manna-from-heaven that interest rates would stay low forever. The manner of fiscal expansion matters too. When the Treasury borrows in the usual manner, it borrows from established long-term investors, who view Treasury debt as debt that will be repaid and not defaulted or inflated. They view it as a savings or investment vehicle, not as cash to be spent. They save, or invest, based on long and so-far mostly successful experience. This time, following canary-in-the-coal-mine disruptions in Treasury markets during March 2020, the Federal Reserve immediately bought new Treasury debt with newly created money, before it even touched these investors' portfolios. The effect of the operation was to print new money and send people checks, so the debt issue is now held as bank deposits flowing into reserves, rather than as Treasury securities. People holding this new money are likely to spend it rather than regard it as a long-term investment. In our simplest economic models, it does not matter who holds the debt. But in just a little more nuanced view, who holds the debt matters. In our simplest economic models, interest-paying reserves and Treasuries are equivalent securities. But people likely do see a difference between reserves and short-run Treasuries. 
Treasuries may well carry a reputation that they will be repaid, while people assume reserves will not be repaid by larger surpluses. Then issuing lots of reserves rather than Treasuries is inflationary, not because the reserves are "money," but rather because they convey a different set of fiscal expectations, just as dropping money or debt from helicopters sends a different signal about repayment than issuing debt at a Treasury auction. Most of the previous operations financed government spending or government worker salaries, counting on higher incomes to slowly filter through the economy. This one sent checks directly to people. Finally, this fiscal stimulus was enormous, and carried out on a deep misdiagnosis of the state of the economy. Even in simplistic hydraulic Keynesian terms, $5 trillion times any multiplier is much larger than any plausible GDP gap. And the Covid recession was not due to a demand deficiency in the first place. A pandemic is, to the economy, like a huge snowstorm. Sending people money will not get them to go out to closed bars, restaurants, airlines, and businesses. "Stimulus," "accommodation," "easing" was the point. This method finally worked, where previous stimulus efforts failed. One can see several suggestive differences, which amount to important economic lessons. What about "supply shocks?" What about a shift of demand from services to durables? Much analysis misses the difference between relative prices and inflation, in which all prices and wages rise together. A supply shock makes one good more expensive than others. Only demand makes all goods rise together. There wouldn't be "supply chain" problems if people were not trying to buy things like mad! A shift in demand from services to durables can make durable prices go up. But it would make services prices go down. 
And let us not even go down the ridiculous path of blaming inflation on a sudden contagious outbreak of "greed" and "collusion" by businesses from oil companies to turkey farmers, needing the administration to send the FTC out to investigate. It is telling that inflation was a complete surprise to the Federal Reserve. The Federal Reserve's job is supposed to be to monitor the supply capacity of the economy and to make sure demand does not outstrip it. The Fed failed twice. First, the economy did not need demand-side stimulus. Insurance was wise, and forestalling a financial crisis was necessary. But sending money to every citizen to stoke demand was not. Second, the Fed being surprised by supply shocks is as excusable as the Army losing a battle because its leaders are surprised that the enemy might attack. As we see by the outcome, the Fed's understanding of supply, largely based on statistical analysis of labor markets, is rudimentary. Will Inflation Continue? If the government borrows or prints $5 trillion, with no change in its plan to repay debt, on top of $17 trillion outstanding debt, then the price level will rise a cumulative 30 percent, so that the $22 trillion of debt is worth in real terms what the $17 trillion was before. In essence, absent a credible increase in future surpluses, the deficit is financed by defaulting on $5 trillion of outstanding debt, via inflation. By this calculation, the 6 percent or so cumulative inflation we have seen so far leaves a way to go. But people may think some of the debt will be repaid. If they think half will eventually be repaid, then the price level need only rise 15 percent overall. But then it stops. A one-time unbacked debt increase leads to a one-time price-level increase, not continuing inflation. Whether inflation continues or not depends on future monetary policy, future fiscal policy, and whether people change their minds about overall debt repayment. Fiscal policy may not be done with us yet. 
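The price-level arithmetic in the preceding paragraph is easy to verify directly (a back-of-the-envelope sketch; all dollar figures come from the essay):

```python
# If $5T of unbacked debt is added to $17T outstanding, the price level must
# rise until the new $22T of nominal debt buys what the old $17T did.
old_debt = 17.0               # trillions of dollars outstanding
new_debt = old_debt + 5.0     # after the unbacked expansion

full_dilution = new_debt / old_debt - 1
print(f"price-level rise if none of the new debt is repaid: {full_dilution:.0%}")
# roughly 29%, the essay's "cumulative 30 percent"

# If people expect half the new $5T to be repaid, only $2.5T is inflated away.
half_dilution = (old_debt + 2.5) / old_debt - 1
print(f"price-level rise if half is repaid: {half_dilution:.0%}")
# roughly 15%, the essay's "15 percent overall"
```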
If unbacked fiscal expansions continue—that is, borrowing when people do not expect additional repayment—then additional bouts of fiscal inflation will occur. Untold trillions of spending, including new entitlements, with no realistic hope of raising tax revenues commensurately to cover them are certainly high on the Biden administration’s agenda. (Higher tax rates do not necessarily mean higher revenues, if economic growth falters; and even so the proposed taxes do not cover the proposed spending increases even with static scoring.) The mentality that borrowed money need never be repaid, because the MMT fairy or r<g magic makes debt free, remains strong in Washington.  But the failure of the so-called “Build Back Better” plan may augur well for budget seriousness and a limit to ill-constructed social policies with strong supply disincentives. The most troublesome question remains: Do people, having decided that at least some of our government’s new debt will not be repaid, so they should spend it now and inflate it away, now think that the government is less likely to repay its existing debts, or less likely to repay future borrowing? If so, even more inflation can break out, seemingly (as always) out of nowhere. Fiscal Constraints on Monetary Policy Fiscal and monetary policies are always intertwined in causing or curing inflation. Even in a pure fiscal theory of the price level, monetary policy (setting interest rates) can control the path of expected future inflation. Thus, whether inflation continues or not also depends on how monetary policy reacts to this fiscal shock and its consequences. Whether the Fed will do something about it is an obvious concern. 
The Fed’s habits and new operating procedures, formed before 2019 in a Maginot Line against perpetual below-target inflation, look remarkably like the Fed of about 1971: Let inflation blow hot to march down the Phillips curve to greater employment, wait for inflation to exceed target for a while before doing anything about it, talk about “transitory” and “supply” shocks to excuse each error. The Fed understands “expectations” now, unlike in 1971, but seems to view them as a third force amenable to management by “forward guidance” speeches rather than formed by a hardy and skeptical experience with the Fed’s concrete actions. The Fed likes to say it has “the tools” to contain inflation, but never dares to say just what those tools are.  In recent U.S. historical experience, the Fed’s tool is to replay 1980: 20 percent interest rates, a bruising recession hurting the disadvantaged especially, and the medicine applied for as long as it takes. Will our Fed really do that? Will our Congress let our Fed do that? Can you deter an enemy without revealing what’s in your arsenal and whether you will use it? If the Fed needs to fight inflation, fiscal constraints on monetary policy will play a large and unexpected role. In 1980, the debt-to-GDP ratio was 25 percent. Today it is 100 percent, and rising swiftly. Fiscal constraints on monetary policy are four times larger today, and counting. For a rise in interest rates to lower inflation, fiscal policy must tighten as well. Without that fiscal cooperation, monetary policy cannot lower inflation. There are two important channels of this interconnection. First, the rise in interest rates raises interest costs on the debt. The government must  pay those higher interest costs, by raising tax revenues and cutting spending, or by credibly promising to do so in the future. 
At 100 percent debt to GDP, 5 percentage points higher interest rates mean an additional deficit of 5 percent of GDP or $1 trillion, for every year that high interest rates continue. This consideration is especially relevant if the underlying cause of the inflation is fiscal policy. If we are having inflation because people don't believe that the government can pay off the deficits it is running to send people checks, and it will not reform the looming larger entitlement promises, then people will not believe that the government can pay off an additional $1 trillion deficit to pay interest costs on the debt. In a fiscally driven inflation, it can happen that the central bank raises rates to fight inflation, which raises the deficit via interest costs, and thereby only makes inflation worse. This has, for example, been offered as an analysis of several episodes in Brazil. Second, if monetary policy lowers inflation, then bondholders earn a real windfall. Fiscal policy must tighten to pay this windfall. People who bought 10-year Treasury bonds in September of 1981 got a 15.84 percent yield, as markets expected inflation to continue. From September 1981 to September 1991, the CPI grew at a 3.9 percent average rate. By this back-of-the-envelope calculation, those bondholders got an amazing 12 percent annual real return. That return came completely and entirely courtesy of U.S. taxpayers. The 1986 tax reform and deregulation, which allowed the United States to grow strongly for 20 years, eventually did produce fiscal surpluses that nearly repaid U.S. federal debt. At 100 percent debt-to-GDP ratio, each 5 percentage point reduction in the price level requires another 5 percent of GDP fiscal surplus. 
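The 1981 bondholder windfall cited above can be reproduced from the essay's own numbers (a sketch; the simple subtraction is the back-of-the-envelope version, and the Fisher relation gives the exact figure):

```python
# Real return to a September 1981 buyer of 10-year Treasuries, using the
# essay's figures: 15.84% nominal yield, 3.9% average CPI inflation 1981-91.
nominal_yield = 15.84   # percent per year
avg_inflation = 3.90    # percent per year

# Back-of-the-envelope: subtract average inflation from the nominal yield.
approx_real = nominal_yield - avg_inflation
print(f"approximate real return: {approx_real:.1f}% per year")
# roughly 12%, the essay's "amazing 12 percent annual real return"

# Exact Fisher relation; barely different at these magnitudes.
exact_real = ((1 + nominal_yield / 100) / (1 + avg_inflation / 100) - 1) * 100
print(f"exact real return: {exact_real:.1f}% per year")
```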
Ask yourself, if inflation gets built into bond yields, and the Fed tries to lower inflation, will our Congress really raise tax revenues or cut spending in order to finance an unexpected (by definition) and undeserved (it will surely be argued) windfall profit to wealthy investors, foreign central bankers, and fat-cats on Wall Street? If it does not do so, the monetary attempt at disinflation fails. We state too casually that the United States will always repay its debts, and prioritize that repayment over all else. We should not take such probity for granted. For example, in the 2021 debt ceiling discussion, it was stated as fact by all concerned, from the Treasury to Congress to the White House, that hitting the debt ceiling must trigger a formal default. That is untrue. The United States could easily prioritize its tax revenues to repaying interest and principal on outstanding debt, by cutting other spending instead. Painful, yes. Impossible, no. That the U.S. contingency plan for a binding debt ceiling is formal default tells you that the spirit of Alexander Hamilton, preaching the sanctity of debt repayment to build reputation so we can borrow in the future, is truly dead. And with inflation, we are not even talking about formal default. The question is, will the United States undertake a sharp fiscal austerity to support monetary policy in the fight against inflation, by paying higher interest costs on the debt and by repaying bondholders in more valuable money? Or will the government just repay as promised, but in dollars that are worth no more than expected? If the government does the latter, monetary policy fails. There is a third troublesome requirement for higher nominal interest rates to produce lower inflation. One needs an economic model in which this is true, that model needs to be correct, and its preconditions need to be met. 
It's not as easy as it sounds, because in the long run, when real interest rates settle down, higher nominal interest rates must come with higher, not lower inflation. So you need an understanding of how and when things work the other direction in the short run. In standard new-Keynesian models, used by all central banks, for example, higher interest rates only produce lower inflation if the higher interest rate is unexpected—that is, a "shock" to the economy—and if there is a sharp contemporaneous fiscal contraction, for the above reasons. A widely expected rise in nominal interest rates raises inflation. A rise in interest rates without the corresponding "austerity" raises inflation. Both preconditions are questionable today. More complex ingredients, such as long-term debt or financial frictions, can allow a higher nominal rate to temporarily lower inflation. But reliance on more complex ingredients and frictions is also dangerous. The future is not hopeless. Inflation control simply requires our government, including the central bank, to understand classic lessons of history. Forestalling inflation is a joint task of fiscal, monetary, and micro-economic policy. Stabilizing inflation once it gets out of control is a joint task of fiscal, monetary, and micro-economic policy. Expectations are "anchored" if people believe such policy is in place, and politicians and Fed officials are ready to act if needed. I add "micro-economic," as it is perhaps the most frequently overlooked adjective. Fiscal surpluses do not result from sharply higher tax rates, especially of a tax system so riven with economic distortions as ours. Fiscal surpluses can come from spending restraint, but that too is difficult. The best road to fiscal surpluses is strong economic growth, which increases the tax base and lowers the need for social spending. In the conundrum between taxes and spending, there is a way out: raise long-term economic growth. 
And there is only one way to do that: to increase the supply-side capacity of the economy. That is, however, just as politically controversial as the first two options. Most of the job is to get out of the way. Most economic regulation is designed to transfer incomes, to protect various interests, or to push on the scales of bilateral negotiation, to undo the harsh siren of economic incentives, in a way that stifles economic growth. Many interests hate pro-growth legislation and regulation just as much as they hate taxes and spending cuts. The r<g crowd has a point, but increasing g is the answer. Much of the “supply shocks” of 2021 come down to the “great resignation”—that is,  the puzzling decline in labor force participation despite a labor shortage. The work disincentives of social programs—paying people not to work, bluntly—are laid bare. All successful inflation stabilizations have combined monetary, fiscal, and micro-economic reforms. I emphasize reforms. In most cases the tax system is reformed to provide more revenue with less distortion. The structure of spending programs is reformed to help people in need more efficiently without work disincentives. Regulations are reformed, though they hurt the profits of incumbents, to increase entry, competition and innovation.  The policy regime is changed, durably. Reversible decisions and pie-crust promises do not do much to change the present value of surpluses, to raise the government’s ability to pledge a long stream of surpluses to support debt. 1980 did not succeed in the United States from monetary toughness only.  1980 included supply-side deregulation, and was quickly followed by the 1982 and 1986 tax reforms. The economy took off, so by the late 1990s economists were seriously writing papers about what to do when the federal debt had all been repaid. Many monetary stabilizations have been tried without fiscal and microeconomic reform. They typically fail after a year or two. 
The history of Latin America is littered with them (Kehoe and Nicolini 2021). The high interest rates of the early 1980s likely represented a fear that the United States would suffer the same fate. The 1970s were not just a failure of monetary policy. The deficits of the Great Society and Vietnam War contributed, while the supply "shocks" and productivity slowdown did their part. These points are especially important if the 2021 inflation turns into a sustained 2020s inflation, as the 1971 inflation turned into a sustained 1970s inflation. For this time, the roots of inflation will most likely be fiscal, a broad change of view that our government really will not eventually reform and repay its debt. The only fundamental answer to that question will be to reform and set in place a durable structure that will repay debt. Monetary machination will be pointless. A small bout of inflation may be useful to our body politic. Inflation is where dreams of costless fiscal expansion, flooding the country with borrowed money to address every perceived problem, hit a hard brick wall of reality. A small bout of inflation and debt problems may reteach our politicians, officials, and commentariat the classic lessons that there are fiscal limits, fiscal and monetary policy are intertwined, and that a country with solid long-term institutions can borrow, but a country without them is in trouble, and one must allow the golden goose to thrive if one wants to tax her eggs. A small bout of inflation may reteach the same classes that supply matters, incentives matter, and sand in the gears matters. The 1980s reforms only happened because the 1970s were so painful. In the meantime, however, there is one technical thing the Fed and Treasury can do to forestall a larger crisis: borrow long. Interest costs feed into the budget as debt rolls over. U.S. debt is shockingly short maturity, rolled over on average about every two years. 
If the United States borrows long-term, then higher interest rates do not raise interest costs on existing debt at all. Shifting to long-term debt would remove one of the main fiscal constraints on monetary policy. The Federal Reserve has not helped this fiscal constraint, by transforming a fifth of the federal debt to overnight, floating-rate debt. The 30-year Treasury rate is, as I write, 2 percent, or about negative 1 percent in real terms. Okay, the 1-year rate is 0.13 percent. As long as this lasts, the government seems to pay lower interest costs. But a 1.87 percent insurance premium to wipe out the danger of a sovereign debt crisis and to buy huge fiscal space to fight inflation seems like a pretty cheap insurance policy. The window of opportunity will not last long, however, as interest rates are already creeping up. Cochrane, J. H. (2022a) "The Fiscal Theory of the Price Level: An Introduction and Overview." Manuscript, in preparation for Journal of Economic Perspectives. Available at www.johnhcochrane.com/research-all/fiscal-theory-jep-article. ____________ (2022b) The Fiscal Theory of the Price Level. Princeton, N.J.: Princeton University Press. Manuscript available until publication at www.johnhcochrane.com/research-all/the-fiscal-theory-of-the-price-level-1. Friedman, M. (1969) "The Optimum Quantity of Money." In M. Friedman, The Optimum Quantity of Money and Other Essays, 1–50. Chicago: Aldine. Kehoe, T. J., and Nicolini, J. P. (2021) A Monetary and Fiscal History of Latin America, 1960–2017. Minneapolis: University of Minnesota Press.
Advanced Cartography with Stata: OpenStreetMap and QGIS
Asjad Naqvi, published in The Stata Guide, Apr 19, 2021 (21 min read). In this guide, we will learn how to import OpenStreetMap (OSM) data in Stata via QGIS. This allows us to make detailed choropleth maps from several spatial layers.
Linux Ate My RAM
Linux is borrowing unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine! Disk caching makes the system much faster and more responsive! There are no downsides, except for confusing newbies. It does not take memory away from applications in any way, ever! If your applications want more memory, they just take back a chunk that the disk cache borrowed. Disk cache can always be given back to applications immediately! You are not low on ram! No, disk caching only borrows the ram that applications don't currently want. It will not use swap. If applications want more memory, they just take it back from the disk cache. They will not start swapping. You can't disable disk caching. The only reason anyone ever wants to disable disk caching is because they think it takes memory away from their applications, which it doesn't! Disk cache makes applications load faster and run smoother, but it NEVER EVER takes memory away from them! Therefore, there's absolutely no reason to disable it! If, however, you find yourself needing to clear some RAM quickly to work around another issue, like a VM misbehaving, you can force Linux to nondestructively drop caches using echo 3 | sudo tee /proc/sys/vm/drop_caches. This is just a difference in terminology. Both you and Linux agree that memory taken by applications is "used", while memory that isn't used for anything is "free". But how do you count memory that is currently used for something, but can still be made available to applications? You might count that memory as "free" and/or "available". Linux instead counts it as "used", but also "available". This "something" is (roughly) what top and free call "buffers" and "cached". Since your and Linux's terminology differs, you might think you are low on ram when you're not. 
To see how much ram your applications could use without swapping, run free -m and look at the "available" column:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:           1504        1491          13           0         855         792
Swap:          2047           6        2041

(On installations from before 2016, look at the "free" column in the "-/+ buffers/cache" row instead.) This is your answer in MiB. If you just naively look at "used" and "free", you'll think your ram is 99% full when it's really just 47%! For a more detailed and technical description of what Linux counts as "available", see the commit that added the field. A healthy Linux system with more than enough memory will, after running for a while, show the following expected and harmless behavior: Warning signs of a genuine low memory situation that you may want to look into: See this page for more details and how you can experiment with disk cache to show the effects described here. Few things make you appreciate disk caching more than measuring an order-of-magnitude speedup on your own hardware!
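The same "available" figure can be read programmatically. A small sketch (assumes a Linux system, where /proc/meminfo reports these fields in kB):

```shell
# Read MemTotal and MemAvailable (in kB) from /proc/meminfo and report
# how much RAM applications could still claim, in MiB and as a percentage.
awk '/^MemTotal:/ {t = $2}
     /^MemAvailable:/ {a = $2}
     END {printf "available: %d MiB (%.0f%% of total)\n", a / 1024, a / t * 100}' /proc/meminfo
```

This is handy in monitoring scripts precisely because it ignores the misleading "free" number and uses the kernel's own estimate of reclaimable memory.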
Covid vaccines cut the risk of transmitting Delta – but not for long
NEWS, 05 October 2021. People who receive two COVID-19 jabs and later contract the Delta variant are less likely to infect their close contacts than are unvaccinated people with Delta. By Smriti Mallapaty, senior reporter in Sydney, Australia. doi: https://doi.org/10.1038/d41586-021-02689-y
2
How to use ES6 proxies to enhance your objects
July 14, 2019 · 8 min read

One of the aspects of programming I love the most is meta-programming: the ability to change the basic building blocks of a language, using that language itself to make the changes. Developers use this technique to enhance the language or even, in some cases, to create new custom languages known as Domain Specific Languages (or DSLs for short). Many languages already provide deep levels of meta-programming, but JavaScript was missing some key aspects. Yes, it's true, JavaScript is flexible enough to let you stretch the language quite a bit, considering how you can add attributes to an object during run-time, or how easily you can enhance the behavior of a function by passing it different functions as parameters. But with all of that, there were still some limits, which the new proxies now allow us to surpass.

In this article, I want to cover three things you can do with proxies that will enhance your objects specifically. Hopefully, by the end of it, you'll be able to expand my code and maybe apply it to your own needs!

Proxies basically wrap your objects or functions around a set of traps, and once those traps are triggered, your code gets executed. Simple, right? The traps we can play around with are: getPrototypeOf, setPrototypeOf, isExtensible, preventExtensions, getOwnPropertyDescriptor, defineProperty, has, get, set, deleteProperty, ownKeys, apply and construct. Those are the standard traps; you're more than welcome to check out Mozilla's Web Docs for more details on each and every one of them, since I'll be focusing on a subset of those for this article. That being said, the way you create a new proxy or, in other words, the way you wrap your objects or function calls with a proxy, looks something like this:

    let myString = new String("hi there!")
    let myProxiedVar = new Proxy(myString, {
      has: function(target, key) {
        return target.indexOf(key) != -1;
      }
    })
    console.log("i" in myString)     // false
    console.log("i" in myProxiedVar) // true

That's the basis of a proxy. I'll be showing more complex examples in a second, but they're all based on the same syntax.
But before we start looking at the examples, I wanted to quickly cover one question, since it's one that gets asked a lot. With ES6 we didn't just get proxies, we also got the Reflect object, which at first glance does exactly the same thing, doesn't it?

The main confusion comes because most documentation out there states that Reflect has the same methods as the proxy handlers we saw above (i.e., the traps). And although that is true, and there is a 1:1 relationship there, the behavior of the Reflect object and its methods is more akin to that of the Object global object. For example, the following code:

    const object1 = {
      x: 1,
      y: 2
    };
    console.log(Reflect.get(object1, 'x'));

will return 1, just as if you had accessed the property directly. So instead of changing the expected behavior, you can just execute it with a different (and in some cases, more dynamic) syntax.

Let's now look at some examples. To start things off, I want to show you how you can provide extra functionality to the action of retrieving a property's value. What I mean by that is, assuming you have an object such as:

    class User {
      constructor(fname, lname) {
        this.firstname = fname
        this.lastname = lname
      }
    }

You can easily get the first name or the last name, but you can't simply request the full name all at once. Or if you wanted to get the name in all caps, you'd have to chain method calls. This is by no means a problem; that's how you'd do it in JavaScript:

    let u = new User("fernando", "doglio")
    console.log(u.firstname + " " + u.lastname)
    //would yield: fernando doglio
    console.log(u.firstname.toUpperCase())
    //would yield: FERNANDO

But with proxies, there is a way to make your code more declarative.
Think about it: what if you could have your objects support statements such as:

    let u = new User("fernando", "doglio")
    console.log(u.firstnameAndlastname)
    //would yield: fernando doglio
    console.log(u.firstnameInUpperCase)
    //would yield: FERNANDO

Of course, the idea would be to add this generic behavior to any type of object, avoiding manually creating the extra properties and polluting the namespace of your objects. This is where proxies come into play: if we wrap our objects and set a trap for the action of getting the value of a property, we can intercept the name of the property and interpret it to get the wanted behavior. Here is the code that lets us do just that:

    function EnhanceGet(obj) {
      return new Proxy(obj, {
        get(target, prop, receiver) {
          if(target.hasOwnProperty(prop)) {
            return target[prop]
          }
          let regExp = /([a-z0-9]+)InUpperCase/gi
          let propMatched = regExp.exec(prop)
          if(propMatched) {
            return target[propMatched[1]].toUpperCase()
          }
          let ANDRegExp = /([a-z0-9]+)And([a-z0-9]+)/gi
          let propsMatched = ANDRegExp.exec(prop)
          if(propsMatched) {
            return [target[propsMatched[1]], target[propsMatched[2]]].join(" ")
          }
          return "not found"
        }
      });
    }

We're basically setting up a proxy for the get trap and using regular expressions to parse the property names. We first check whether the name matches a real property, and if it does, we just return it. Then we check for matches on the regular expressions, capturing the actual property names in order to get those values from the object and further process them. Now you can use that proxy with any object of your own, and the property getter will be enhanced!

Next, we have another small but interesting enhancement. Whenever you try to access a property that doesn't exist on an object, you don't really get an error; JavaScript is permissive like that. All you get is undefined returned instead of its value.
What if, instead of getting that behavior, we wanted to customize the returned value, or even throw an exception, because the developer is trying to access a non-existing property? We could very well use proxies for this. Here is how:

    function CustomErrorMsg(obj) {
      return new Proxy(obj, {
        get(target, prop, receiver) {
          if(target.hasOwnProperty(prop)) {
            return target[prop]
          }
          return new Error("Sorry bub, I don't know what a '" + prop + "' is...")
        }
      });
    }

Now, that code will cause the following behavior:

    > pa = CustomErrorMsg(a)
    > console.log(pa.prop)
    Error: Sorry bub, I don't know what a 'prop' is...
        at Object.get (repl:7:14)
        at repl:1:16
        at Script.runInThisContext (vm.js:91:20)
        at REPLServer.defaultEval (repl.js:317:29)
        at bound (domain.js:396:14)
        at REPLServer.runBound [as eval] (domain.js:409:12)
        at REPLServer.onLine (repl.js:615:10)
        at REPLServer.emit (events.js:187:15)
        at REPLServer.EventEmitter.emit (domain.js:442:20)
        at REPLServer.Interface._onLine (readline.js:290:10)

We could be more extreme, like I mentioned, and do something like:

    function HardErrorMsg(obj) {
      return new Proxy(obj, {
        get(target, prop, receiver) {
          if(target.hasOwnProperty(prop)) {
            return target[prop]
          }
          throw new Error("Sorry bub, I don't know what a '" + prop + "' is...")
        }
      });
    }

And now we're forcing developers to be more mindful when using your objects:

    > a = {}
    > pa2 = HardErrorMsg(a)
    > try {
    ...   console.log(pa2.property)
    ... } catch(e) {
    ...   console.log("ERROR Accessing property: ", e)
    ... }
    ERROR Accessing property: Error: Sorry bub, I don't know what a 'property' is...
        at Object.get (repl:7:13)
        at repl:2:17
        at Script.runInThisContext (vm.js:91:20)
        at REPLServer.defaultEval (repl.js:317:29)
        at bound (domain.js:396:14)
        at REPLServer.runBound [as eval] (domain.js:409:12)
        at REPLServer.onLine (repl.js:615:10)
        at REPLServer.emit (events.js:187:15)
        at REPLServer.EventEmitter.emit (domain.js:442:20)
        at REPLServer.Interface._onLine (readline.js:290:10)

Heck, using proxies you could very well add validations to your set traps, making sure you're assigning the right data type to your properties. There is a lot you can do, using the basic behavior shown above, to mold JavaScript to your particular desire.

The last example I want to cover is similar to the first one. Whereas before we added extra functionality by using the property name to chain extra behavior (like with the "InUpperCase" ending), now I want to do the same for method calls. This would allow us to not only extend the behavior of basic methods just by adding extra bits to their names, but also receive parameters associated with those extra bits. Let me give you an example of what I mean:

    myDbModel.findById(2, (err, model) => {
      //....
    })

That code should be familiar to you if you've used a database ORM in the past (such as Sequelize or Mongoose, for example). The framework is capable of guessing what your ID field is called, based on the way you set up your models. But what if you wanted to extend that into something like:

    myDbModel.findByIdAndYear(2, 2019, (err, model) => {
      //...
    })

And take it a step further:

    myModel.findByNameAndCityAndCountryId("Fernando", "La Paz", "UY", (err, model) => {
      //...
    })

We can use proxies to enhance our objects into allowing such behavior, letting us provide extended functionality without having to manually add these methods. Besides, if your DB models are complex enough, all the possible combinations become too many to add, even programmatically; our objects would end up with too many methods that we're just not using.
This way we’re making sure we only have one, catch-all method that takes care of all combinations. In the example, I’m going to be creating a fake MySQL model, simply using a custom class, to keep things simple: var mysql = require('mysql');var connection = mysql.createConnection({ host : 'localhost', user : 'user', password : 'pwd', database : 'test'}); connection.connect();class UserModel { constructor(c) { this.table = "users" this.conn = c }} The properties on the constructor are only for internal use, the table could have all the columns you’d like, it makes no difference. More great articles from LogRocket: Don't miss a moment with The Replay, a curated newsletter from LogRocket Learn how LogRocket's Galileo cuts through the noise to proactively resolve issues in your app Use React's useEffect to optimize your application's performance Switch between multiple versions of Node Discover how to animate your React app with AnimXYZ Explore Tauri, a new framework for building binaries Advisory boards aren’t just for executives. Join LogRocket’s Content Advisory Board. You’ll help inform the type of content we create and get access to exclusive meetups, social accreditation, and swag. let Enhacer = { get : function(target, prop, receiver) { let regExp = /findBy((?:And)?[a-zA-Z_0-9]+)/g return function() { // let condition = regExp.exec(prop) if(condition) { let props = condition[1].split("And") let query = "SELECT * FROM " + target.table + " where " + props.map( (p, idx) => { let r = p + " = '" + arguments[idx] + "'" return r }).join(" AND ") return target.conn.query(query, arguments[arguments.length - 1]) } } }} Now that’s just the handler, I’ll show you how to use it in a second, but first a couple of points: That’s it, the TL;DR of the above would be: we’re transforming the method’s name into a SQL query and executing it using the actual query method. 
Here is how you'd use the above code:

    let eModel = new Proxy(new UserModel(connection), Enhacer) //create the proxy here

    eModel.findById("1", function(err, results) { //simple method call with a single parameter
      console.log(err)
      console.log(results)
    })

    eModel.findByNameAndId('Fernando Doglio', 1, function(err, results) { //extra parameter added
      console.log(err)
      console.log(results)
      console.log(results[0].name)
    })

That is it; after that, the results are used just like you normally would, nothing extra required. That would be the end of this article. Hopefully it helped clear up a bit of the confusion behind proxies and what you can do with them. Now let your imagination run wild and use them to create your own version of JavaScript! See you on the next one!
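As a postscript: the set-trap validation mentioned earlier never got its own example. Here's a minimal sketch; the schema shape (property name to expected typeof result) and the error message are my own invention, not something from the article:

```javascript
// Wrap an object so that assignments are type-checked against a schema.
// The schema maps property names to expected `typeof` results.
function ValidatedObject(obj, schema) {
  return new Proxy(obj, {
    set(target, prop, value) {
      if (schema[prop] && typeof value !== schema[prop]) {
        throw new TypeError(
          "Property '" + prop + "' expects a " + schema[prop] +
          ", got a " + typeof value
        )
      }
      target[prop] = value
      return true // tell the runtime the assignment succeeded
    }
  })
}

let user = ValidatedObject({}, { age: "number", name: "string" })
user.age = 30        // fine
// user.age = "old"  // would throw a TypeError
```

Properties not listed in the schema pass through untouched, so the wrapper stays opt-in per field.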
61
Practical machine learning to estimate traffic flow in San Juan, Puerto Rico
Getting started without a pile of data can make building models difficult. We needed a nice way to collect some data from the traffic here in San Juan and found DTOP has some strategically located webcams strung along the main highway cutting through the city. Great! Except raw images aren't particularly useful on their own, and while manually cleaning data is always on the table, it's not something I want to do for each new image coming through the feed. yolov5 to the rescue! YoloV5 (`ultralytics/yolov5`) is an object detection model that is impressively accurate out of the box at identifying things we care about. What do we care about? Cars, buses and trucks in particular. So let's build a model and start feeding it input from our webcams around the city. We're using PyTorch here; the approach is fairly framework-agnostic, but we like PyTorch.

    import json
    from PIL import Image
    import numpy as np
    import torch
    import time

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # load up the model
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
    model.to(device)

    # Our 7 cameras to care about at the moment
    sj1 = ("http://its.dtop.gov.pr/images/cameras/26-0.1_01_MD-IPV.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=119&Large=1",
           10, 25, 18.458339088897567, -66.08570387019088)
    sj2 = ("http://its.dtop.gov.pr/images/cameras/26-1.1_03_MD-IPV.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=121&Large=1",
           10, 25, 18.454611048837556, -66.07684808241595)
    sj3 = ("http://its.dtop.gov.pr/images/cameras/26-2.1_04_WB-IPV.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=143&Large=1",
           10, 25, 18.451301357824246, -66.0680360794367)
    sj4 = ("http://its.dtop.gov.pr/images/cameras/26-3.0_05_MD-IPV.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=123&Large=1",
           20, 35, 18.44865740439453, -66.06059797491021)
    sj5 = ("http://its.dtop.gov.pr/images/cameras/26-5.7_07_WB-IPV.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=125&Large=1",
           10, 25, 18.44634309886566, -66.03470036662318)
    sj6 = ("http://its.dtop.gov.pr/images/cameras/26-6.5_08_WB-IPV.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=126&Large=1",
           20, 30, 18.443494845651536, -66.02780514096465)
    sj7 = ("http://its.dtop.gov.pr/images/cameras/CCTV_Minillas_PR-22.jpg",
           "http://its.dtop.gov.pr/en/TrafficImage.aspx?id=57&Large=1",
           16, 25, 18.44818055202958, -66.06798119910)

    sjOriRoutes = [sj1, sj2, sj3, sj4, sj5, sj6, sj7]

    class NpEncoder(json.JSONEncoder):
        def default(self, obj):
            if isinstance(obj, np.integer):
                return int(obj)
            if isinstance(obj, np.floating):
                return float(obj)
            if isinstance(obj, np.ndarray):
                return obj.tolist()
            return super(NpEncoder, self).default(obj)

    def printTraffic(carCount, route):
        if carCount == 0:
            return "no traffic"
        if carCount < route[2]:
            return "low traffic"
        if carCount < route[3]:
            return "medium traffic"
        return "high traffic"

    starttime = time.time()

    # every 60 seconds let's go grab the next batch of cam images,
    # ask our model for detections, count them,
    # then save our file and results for an app to grab
    while True:
        finalResult = []
        for route in sjOriRoutes:
            results = model(route[0])
            counts = results.pandas().xyxy[0].name.value_counts()
            vehicleCount = 0
            if "car" in counts:
                vehicleCount += counts["car"]
            if "bus" in counts:
                vehicleCount += counts["bus"]
            if "truck" in counts:
                vehicleCount += counts["truck"]
            res = printTraffic(vehicleCount, route)
            results.render()
            fileName = "static/img/" + route[0].split('/')[-1]
            finalResult.append((fileName, route[0], route[1], route[2], route[3],
                                vehicleCount, res, route[4], route[5]))
            im = Image.fromarray(results.imgs[0])
            im.save(fileName)
        with open('latest.json', 'w') as outfile:
            json.dump(finalResult, outfile, cls=NpEncoder)
        time.sleep(60)

So we have a fairly intuitive way of grabbing vehicles, counting them out and, with some arbitrary thresholds, deciding whether it's no/low/medium or high traffic in the image.
However, it didn't take long to realize webcam quality isn't great, and the object detection fails to detect all the cars all the time, leaving us with a full freeway marked as medium or even low traffic. That wasn't the worst offense, though: at night we realized a packed rush-hour ride home was failing to find a single car in the sea of brake lights, so gridlock traffic was decidedly "no traffic"...

Your human brain, when viewing the webcam frames, doesn't spend its time counting vehicles and determining a specific number you'd deem "high" traffic or "low" traffic, so while this works to an impressive degree, the computer still falls a bit short of great. So how do we do this? Well, perhaps when we were younger it was something closer to counting cars, or hearing screams of frustration while stuck in heavy traffic, that let us correlate the scenery and the level of cars around us with low/medium/high traffic conditions without really knowing it. As we get older, we can glance at these images of traffic and instantly give someone a reasonable response about what the flow of traffic looks like. We'd like our model to have a similar level of understanding based on entire images, versus spending its time counting the number of cars on the road and relying on a human to supply a fairly arbitrary cutoff for its determinations.

Where are we? We now have a semi-accurate way of categorizing images from the webcams and a good idea of the circumstances where we fail to properly detect traffic. So away we go to write a python script that stores all the images we've retrieved into nicely categorized subfolders, based on our yolov5 detection results as they'd be displayed in our application currently. We're going to go back through these afterwards and manually move the ones we got wrong into their correct subfolders. With a few tweaks to the original code we were using, we're able to get this going quickly.
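Those tweaks boil down to routing each saved frame into a subfolder named after its detected category. A minimal sketch of that step (the folder layout, function names, and `root` default are illustrative, not the article's actual code):

```python
import shutil
from pathlib import Path

def traffic_label(vehicle_count, low_cutoff, high_cutoff):
    """Map a raw vehicle count to a category, mirroring printTraffic above."""
    if vehicle_count == 0:
        return "no_traffic"
    if vehicle_count < low_cutoff:
        return "low_traffic"
    if vehicle_count < high_cutoff:
        return "medium_traffic"
    return "high_traffic"

def file_destination(image_path, vehicle_count, low_cutoff, high_cutoff,
                     root="training_data"):
    """Compute the labeled subfolder a webcam frame should be filed under."""
    label = traffic_label(vehicle_count, low_cutoff, high_cutoff)
    return Path(root) / label / Path(image_path).name

def sort_frame(image_path, vehicle_count, low_cutoff, high_cutoff,
               root="training_data"):
    """Move a saved frame into its category folder, creating it if needed."""
    dest = file_destination(image_path, vehicle_count, low_cutoff, high_cutoff, root)
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(image_path, dest)
    return dest
```

The per-camera cutoffs are the same two threshold values already stored in each route tuple, so the sorted folders line up with what the app would have displayed.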
Running that for a period of time to collect lots of samples from our webcams at different times throughout the day was a great success: a few thousand images produced for each of our categories. Despite some of them being wildly wrong, we're still optimistic. Opening my results folder on one monitor, and a specific subfolder in an image viewer on the other, I prepared myself for the next tedious hour-or-so of my life. A few hours of moving images from their original folders into their correct subfolders, based on my human understanding of some nuances, and I felt confident that most of my training images were correctly labeled. (Perfect is the enemy of progress.)

So I now wanted to build a simple neural network with pytorch, trained exclusively on my categorized images of the webcams here in Puerto Rico, and see if I could meet or exceed the first attempt that used raw object-detection counts via yolov5. Easy enough. Training the model for a long while on my GPU resulted in a 'best.model' file saved for us to use in our application. Let's go ahead and update our original application code that relied on our yolov5 model counting vehicles to come up with useful data. And now, as you can find on our San Juan, Puerto Rico page, we're serving our webcam stills with an educated guess on the flow of traffic on the highways using the DTOP webcams here in Puerto Rico.

Once we confirm our results are on par with or better than the original object-detection approach, we will substitute yolov5 with our new model in the python script that collects and organizes the training data. The second most exciting part is watching the training data become less and less messy, and subsequently getting better models with less work. So what's next?
We're hoping to get a minimalist training pipeline going so that our model becomes a nightly build based on the training data. If we find the new model more performant than the previous one, we'll save it out to our file system for our python script to grab and start doing a better job categorizing training data, and also update our application's models to start using the latest and smartest model available.
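The promotion step of that nightly pipeline could be as simple as comparing a validation score and swapping the file. A sketch under my own assumptions: the `best.model` name comes from the article, but the `metrics.json` accuracy bookkeeping is invented for illustration:

```python
import json
import shutil
from pathlib import Path

def promote_if_better(candidate_model, candidate_accuracy,
                      production_model="best.model",
                      metrics_file="metrics.json"):
    """Replace the production model only when the nightly build beats it."""
    metrics_path = Path(metrics_file)
    current_accuracy = 0.0
    if metrics_path.exists():
        current_accuracy = json.loads(metrics_path.read_text())["accuracy"]
    if candidate_accuracy <= current_accuracy:
        return False  # keep the model we already have
    shutil.copyfile(candidate_model, production_model)
    metrics_path.write_text(json.dumps({"accuracy": candidate_accuracy}))
    return True
```

A cron job would train, evaluate on a held-out slice of the categorized frames, then call this with the new score; the serving script just keeps loading `best.model`.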
7
Facebook Just Suspended the Accounts of Some of Its Biggest Critics
CEO of Facebook Mark Zuckerberg walks to lunch following a session at the Allen & Company Sun Valley Conference on July 08, 2021 in Sun Valley, Idaho. (Kevin Dietsch/Getty Images)

Facebook has made good on its threat to kick out a group of researchers who've been among the platform's biggest critics. The Cybersecurity for Democracy project at New York University has revealed major flaws in Facebook political ad transparency tools and highlighted how Facebook's algorithms were amplifying misinformation. Most recently, it helped track vaccine disinformation in coordination with the Virality Project, a group that tries to neutralize false narratives spreading on social media. Despite the obvious benefits of the work being done by these researchers, on Tuesday evening, the company cut the cord. "This evening, Facebook suspended my Facebook account and the accounts of several people associated with Cybersecurity for Democracy, our team at NYU," Laura Edelson, one of the researchers at NYU, tweeted. "This has the effect of cutting off our access to Facebook's Ad Library data, as well as CrowdTangle." Edelson's colleague Damon McCoy called Facebook's decision "disgraceful" at a time when the disinformation around COVID-19 and vaccines is literally costing lives. "It is disgraceful that Facebook is attempting to quash legitimate research that is informing the public about disinformation on their platform," McCoy said in a statement shared by Edelson.
"With its platform awash in vaccine disinformation and partisan campaigns to manipulate the public, Facebook should be welcoming independent research, not shutting it down." Edelson said in an emailed statement sent to VICE News that the decision to suspend the accounts happened "hours after she had informed the platform that she and McCoy were studying the spread of disinformation about January 6 on the social media platform." Facebook told VICE News that "any insinuation that this was an abrupt removal of access or retaliation does not comport with reality." It didn't respond to follow-up questions about when the decision was taken. Facebook's decision was also slammed by Sen. Amy Klobuchar, who has in the past questioned CEO Mark Zuckerberg about the decision to threaten the NYU researchers. "As we face threats to our democracy, we need more transparency from online platforms, not less," Klobuchar said. "That is why I am deeply troubled by the news that Facebook is cutting off researcher access to political advertising data, which has shown that the company continues to sell millions of dollars' worth of political ads without proper disclosures." While the shutdown was unexpected, Facebook's ire about a tool the researchers created dates back over a year. The tool is a browser extension called Ad Observer, which users voluntarily download. Users give the extension access to their personal Facebook pages in order to collect anonymized data about the ads they're seeing. That information then goes into a public database, where journalists and researchers can see how and where politicians are focusing their ad spend. Facebook felt Ad Observer was a breach of users' privacy and issued the researchers a warning in a meeting last summer, before the tool was even launched. In October, two weeks before the presidential election, Facebook sent a cease-and-desist letter, giving them 45 days to shut it down.
That deadline passed at the end of November, and at the time it looked like Facebook had relented and allowed the tool to remain in place. The media coverage of Facebook's letter had also been a boon for the project: the number of people who consented to share their data via Ad Observer doubled to over 16,000 people in the space of a few weeks. Since last November, the two sides have been trying to come to a formal agreement, but on Tuesday those negotiations ended abruptly. "NYU's Ad Observatory project studied political ads using unauthorized means to access and collect data from Facebook, in violation of our Terms of Service," Mike Clark, Facebook's product management director, wrote in a blog post. "We took these actions to stop unauthorized scraping and protect people's privacy in line with our privacy program under the [Federal Trade Commission] Order." But the researchers involved, and other experts in the field, believe that Facebook is simply using the regulatory body as an excuse to eradicate independent researchers who have repeatedly highlighted significant flaws in the platform. Johnathon Mayer, an assistant professor of computer science and public affairs at Princeton University, said Facebook's claim that it is required to take this action because of rules imposed by the FTC is "bogus." "There is, of course, no requirement that Facebook prohibit independent accountability research and journalism in its terms," Mayer tweeted.
"This is a classic blame-the-regulator dodge." And as Axel Burns, a social media researcher at Queensland University of Technology, pointed out on Twitter, "this research is necessary, of course, because the tools Facebook provides are so inadequate — the Ad Library is severely limited and (as a single point of data access operated by Facebook) can't be trusted or verified to provide all the relevant data." Jason Kint, CEO of Digital Content Next, a nonprofit trade association for the digital content industry, said that Facebook has a history of suspending the accounts of people who criticize it: it suspended the account of Cambridge Analytica whistleblower Christopher Wylie. Kint added that this latest suspension will have a "chilling" effect on the wider research community. "It's outrageous they would do this," Kint told VICE News. "Facebook has used access to their platforms as punitive damage in the past. This isn't only about Facebook not wanting to be held accountable but chilling all others who expose the harms in Facebook's business practices." Some experts have also contrasted the suspension of the NYU researchers' accounts with the lack of action Facebook took against Clearview AI, a company that scraped hundreds of millions of images from the platform. Those same experts pointed out that Clearview AI is backed by Peter Thiel, one of Facebook's biggest and earliest investors, who is also a member of the company's board. "Outside analysis of Facebook content from essential organizations like the Ad Observatory are increasingly exposing Facebook as a breeding ground for extremism and right wing trash," a spokesperson for the Real Facebook Oversight Board, an activist group established to counter the company's own Oversight Board, and of which Edelson is a member, told VICE News.
“Now like the authoritarian governments they court, Facebook is cracking down on its critics.” Several others have pointed out the inherent irony in Facebook, a company that bulk collects information about users’ activity—both on and off its platform—criticizing researchers who are attempting to better understand disinformation and nefarious political advertising tactics on Facebook. “Allowing Facebook to dictate who can investigate what is occurring on its platform is not in the public interest,” McCoy said. “Facebook should not be able to cynically invoke user privacy to shut down research that puts them in an unflattering light, particularly when the ‘users’ Facebook is talking about are advertisers who have consented to making their ads public.”
1
DevOps Via Negativa
I have finally finished reading "Antifragile" by Nassim Taleb, and while this is a great book in general, a lot of things about Via Negativa resonated with me with regard to the DevOps field specifically. In short, Via Negativa is a way to achieve things by not doing something, rather than by doing. Below are my thoughts on how this principle could be applied to DevOps:

- Pick a lighter solution if it fits the needs. For example, for a development cluster we frequently recommend a single K3s VM instead of something much larger like Elastic Kubernetes Service.
- Novelty Budget (sometimes called Innovation Budget): put a hard limit on the amount of new technology that may be introduced in a specific time period.
- Minimize the number of moving pieces in the stack. Yes, you can put 5 proxies one on top of another, but better not. Frankly, I frequently feel that untangling all the hoops of over-engineering takes more time than developing a solution in the first place. This sentiment is often shared in the industry in general, when developers prefer to rewrite the whole project from scratch rather than trying to update what they have.
- Minimize the number of scheduled meetings. Tons of material have been written on this subject. In short, my point goes as follows: all meetings should be driven by the actual need to discuss something, and then by the shape of the project. If there is a major issue with the project, then a daily status meeting may be warranted; otherwise a once-a-week all-team status meeting should be more than enough.
- Remote work all the way: cut the needless routine and commute to get to the office. I was talking about this before Covid, and what surprises me now is that some organizations still try to get everyone back to the office. Seriously? Now, when we have undeniable proof that remote work is as productive as onsite, somebody is still trying to find a way back.
Hopefully, the strained job market will correct this for good. The above is by no means an exhaustive list; many more ideas and examples of what we can avoid doing to achieve better results could be added. But I hope this list illustrates the Via Negativa principle in DevOps well.
1
Webster’s Dictionary Defines “View Source” As
I wrote previously about the spirit of view source but didn't get out everything I had to say. Since this is my blog and I can say what I want, here are some more observations about view source. There's been a debate the last few years over whether keeping "View Source" as a feature remains relevant. Tom Dale, Chris Coyier, Jonathan Snook, and Chris Heilmann have all weighed in on the matter. I'm not here to add my opinion, but I do want to make a couple of observations. First, when we talk about "View Source", what precisely are we talking about? I think this is an important point to clarify, as it sometimes goes unsaid, and therefore a lot of assumptions sneak into the conversation and we might not realize we're not all talking about the same thing. In this post, I specifically want to talk about the idea of viewing the source HTML of any given website. But what does that even mean? Referring to HTML specifically, if someone says they want to "view the source" of a web page, what are they talking about? Conceivably, they could be talking about one of three things: the HTML as it was authored, the HTML as delivered to the browser over the network, or the HTML as it exists in the DOM at runtime.

I doubt many sites on the web today are authored in HTML. The HTML that gets delivered to the browser over the network is likely the result of a computer somewhere stitching together raw content with coded templates, then minifying it all for optimal delivery over the network (in the case of SPAs, some of this process takes place on the client). When someone says they want to view the source of a webpage, they could mean they want to view the source code that generates the HTML that gets delivered to the browser at request time (or, in the case of SPAs, the source code that generates the HTML that gets injected into the DOM at runtime).

This is the early-days-of-the-web interpretation of "View Source". I call it "View Page Source" because that's how most browsers have implemented this feature (a topic I've written about previously).
It refers to the ability to view the original, unformatted HTML as delivered to the browser over the network at request time. This may or may not be the format it was authored in (see the point above). Before JavaScript runs, before CSS paints, this is what the browser is looking at. In the beginning was HTML. The third meaning I call “View Runtime Source”, because what you’re really trying to do is view the HTML that is powering the website you’re looking at in its current state. In other words, view the DOM (accessible via the DevTools). It’s worth noting that this representation of the page’s HTML could be the same, slightly different, or drastically different from the code you see when you “View Page Source” (i.e. the HTML delivered over the network at request time). “View Runtime Source” is a bit of a Schrödinger’s cat scenario: what the HTML for a given web page looks like depends on the moment in time at which you look at it. I wanted to try comparing and contrasting these two perspectives of HTML: “View Page Source” (the HTML at delivery time over the network) and “View Runtime Source” (the HTML after page load, at the time of inspection). I chose two “pages” that I guessed were probably quite different from each other, a page on CSS-Tricks and the Twitter home feed, and diffed their “page source” and “runtime source”. My approach was completely non-scientific yet still interesting: once I had the before and after HTML files, I put them in a diffing tool and started reading. As I parsed the differences, I took the notes you see below. The “before” HTML was the raw HTML sent over the wire. The “after” HTML was a stringification of the DOM a couple of seconds after first requesting the URL; the string is a representation of what the browser decides to output if you click “Edit as HTML” on the root DOM node and then copy it. 
In other words, it’s the browser’s version of saying “you input HTML, I parsed it, executed relevant JavaScript, and now have this HTML representation.” Because of this, some diffs in the HTML represented side effects of the approach and not necessarily intentional HTML differences. For example: the browser might read in a decimal character reference (e.g. &#38;) in the “View Page Source” version of the HTML, but then convert it to a named entity (&amp;) when you “stringify the DOM”. A lot of these kinds of differences showed up in my diffs; I may mention some of them but not all of them. Another example: all single quotes in the “View Page Source” HTML were converted to double quotes, so an attribute like type='text' was turned into type="text". One more example: some self-closing tags were converted to opening/closing tags, i.e. <meta /> and <link /> tags were changed to <meta> and <link>. It is worth noting that I saw this same phenomenon in SVG, i.e. <path /> gets changed to <path></path> by the browser. Apparently there are some rules around this when writing SVG in HTML, at least according to StackOverflow: You can write a path as <path></path> or <path/> but you can't write it as <path>. And in HTML as well, according to StackOverflow: Using the self-closing syntax in HTML5 is valid (but only for void elements, such as <br/>, or foreign elements, which are those from MathML and SVG). While interesting, I omitted most observations on details like this because they pertain more to the implementation details of how the browser parses HTML (and to my general approach to this experiment) and less to what the developer’s intentions were in manipulating the source HTML delivered over the network upon page load. Source page: “Using WebP Images” on CSS-Tricks. You can see an image of the source diff I parsed through. 
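As a rough illustration of that serialization noise (the strings and file names below are mine, not taken from either page), Python’s stdlib difflib makes the phenomenon easy to see:

```python
import difflib

# HTML as delivered over the wire ("View Page Source"): single quotes,
# a self-closing void element, and a decimal character reference.
wire_html = "<meta charset='utf-8' />\n<p>Fish &#38; Chips</p>"

# The same markup after a browser parses and re-serializes it
# ("View Runtime Source"): double quotes, no self-closing slash,
# and a named entity.
runtime_html = '<meta charset="utf-8">\n<p>Fish &amp; Chips</p>'

diff = difflib.unified_diff(
    wire_html.splitlines(),
    runtime_html.splitlines(),
    fromfile="page-source.html",
    tofile="runtime-source.html",
    lineterm="",
)
print("\n".join(diff))
```

Every line of that diff is a browser serialization artifact, not something JavaScript did on purpose, which is why I filtered this category of change out of my notes.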
What follows are the primary differences I observed in how the source HTML was manipulated by the execution of JavaScript upon page load. Pretty standard stuff: adding and removing classes at the document level, likely used for styling things based on whether JS is present. Sometimes you can tell just by the name of the class. You can see it happened with the <body> tag as well: woocommerce-no-js to woocommerce-js. Why not add all the classes to the html or body tag only? I’m going to guess these are third-party JavaScript plugins on the page, so there’s no real concerted effort by a single team to be holistic about the approach. As you may have noticed from the above screenshot, other stuff changed right after the opening <body> tag. “Directly after the <body> tag” is a pretty standard place JavaScript likes to stick stuff. If you try to parse through it, it looks like some custom <style>s from a plugin (#fit-vids-style) as well as some ad injection from buysellads.com. Again, likely third-party JavaScript written by independent teams staking their claim in the DOM as plugins. Google Analytics gets its place in here. Based on its position in the diff, I’m going to guess there’s some greedy logic behind its particular placement (an “I want to be first” attitude), something like “find the first <script> element on the page, then inject Google Analytics right before it.” Some ads got injected into placeholders that shipped in the HTML that came over the network. For what it’s worth, there were a number of placeholder <div>s for ad injection throughout the diff. This was an interesting diff I stumbled on: it looks like some JavaScript is going through the document and dynamically adding anchor links to each of the headings in the post content. I’ve always approached this by having the build process for my content parse the headings, turn them into slugs (i.e. 
“This is My Heading” → “this-is-my-heading”), and turn them into anchor-able links. This approach is different in two ways. First, the anchor-ifying of headings appears to take place on the client, not the server. I’m not sure what the advantage of this might be. Could it be a constraint of the way the page is authored, put together, and served? Even then, because this is client side, navigating directly to an id via the URL would break the default browser behavior of moving you down the page to that element, because the id in the URL wouldn’t be found in the document at load time. It’s only after JavaScript has been parsed and executed that the id gets injected into the page. So whatever JavaScript is doing this anchor-ifying of headings must also be jumping you down the page. Secondly, the id attributes in this case seem to be agnostic of their content: the script looks to be dynamically adding an incremented counter to each heading as it goes down the page. I’m unsure of the why behind this approach as well. My best guess: headings are probably more likely to change in wording after publication than in position, so content-agnostic ids result in fewer broken anchor links over time? I don’t know, that’s my guess. Either way, this approach made me pause and think about how I generally approach this problem, which was good. That’s kind of the point of this whole exercise. Lazy-loading of images happened a lot in the post content area. Seems pretty standard. This particular implementation looks like a third-party plugin that’s handling it all. One day we’ll get solid support across browsers for native lazy loading and this kind of thing will (hopefully) become obsolete. It also looks like syntax highlighting is happening on the client, rather than being delivered in the initial HTML. 
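Back to those heading anchors for a second: my usual server-side approach is a content-derived slug computed at build time. A minimal sketch (this helper is illustrative only, not code from either site):

```python
import re

def slugify(heading_text: str) -> str:
    """Derive a URL-safe anchor id from a heading's text,
    e.g. "This is My Heading" -> "this-is-my-heading"."""
    slug = heading_text.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse spaces/punctuation into hyphens
    return slug.strip("-")

# Because the id is derived from the content, the anchor stays stable
# across builds as long as the heading's wording doesn't change.
print(slugify("This is My Heading"))  # -> this-is-my-heading
```

The counter-based approach in the diff makes the opposite trade: ids survive rewording but break if headings are reordered.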
Client-side syntax highlighting struck me as interesting, especially since I’ve always done syntax highlighting on the server in my build process, such that the markup to support it gets delivered in the initial HTML. I’d never really considered this, but I suppose you could argue that doing syntax highlighting on the client is a “progressive enhancement”? Let me think out loud for a minute: the source code delivered in the initial HTML is actually more readable when it’s devoid of all the <span>s that wrap the individual pieces of code purely for stylistic purposes. In other words, the pure code in HTML (without all the markup required for syntax highlighting) is more human readable. For example, just look at this snippet. If JavaScript didn’t load for this page, this code markup would remain incredibly readable, more so than if you wrapped it in the markup required for syntax highlighting on the server and delivered it in the initial HTML to the wide variety of clients that might request it. As previously noted, I’ve always baked the syntax-highlighting markup into the server-side HTML, but this approach made me reconsider that, at least for a moment. Again, more reason to read other people’s code. Related posts appear to get loaded dynamically by JavaScript (into a placeholder <div> in the original HTML). Makes sense: they’re not required to understand the current document, so “enhancing” the page with them seems like a good approach. Personally, I’ve found myself doing this on some projects as well. Being able to have a single index you update periodically and then fetch dynamically on the client to generate a list of related posts often makes more sense than regenerating the “related posts” for every single page you’ve ever made, all the time. Now let’s turn and look at a totally different way of doing a web page. For this example, I logged in to Twitter and looked at the source for my feed. The only major diff on this site was, well, everything. 
There was a single root node where the entire content of the document got injected. In contrast to the CSS-Tricks example above, this is obviously a very different way of approaching building for the web. “View Page Source” becomes less relevant in this world because no functional content is shipped in the HTML over the wire. All relevant content comes in after the initial page load and therefore will only be visible in the HTML at the time of “View Runtime Source”. For kicks and giggles, I actually measured the diff: as noted in the image, the diff screenshot of HTML over the wire (“View Page Source”) vs. HTML at runtime (“View Runtime Source”) is ~24,000 pixels tall. That is a lot of code injected after the initial page load. One small thing I learned from Twitter’s HTML over the wire was this thing called nonce. According to StackOverflow, the nonce attribute is a way of telling browsers that the inline contents of a particular script or style element were not injected into the document by some (malicious) third party, but were instead put into the document intentionally by whoever controls the server the document is served from. In other words, nonce is a way of indicating that a particular tag came over the wire and wasn’t injected at runtime. Again, it’s useful to make a distinction about what kind of “source” you’re looking at when you read an HTML document. We often talk about the different approaches to building for the web: serving an entire page at request time and enhancing with JavaScript vs. serving only a bare shell and injecting all content at runtime via JavaScript. But it was an interesting experiment for me to actually dig into the implementation details of what that means in the light of “View Source”. 
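One aside before wrapping up, to make the nonce mechanism concrete. In Content Security Policy, the server mints a random value per response, advertises it in the CSP header, and stamps it on the inline scripts it intentionally shipped; scripts injected later don’t know the value, so the browser refuses to run them. A minimal server-side sketch (hypothetical helper names, the general CSP mechanism rather than Twitter’s actual implementation):

```python
import secrets

def render_response() -> tuple[dict, str]:
    """Mint a fresh per-response nonce and use it in both the CSP
    header and the inline script the server intends to ship."""
    nonce = secrets.token_urlsafe(16)  # unguessable, new on every response
    headers = {"Content-Security-Policy": f"script-src 'nonce-{nonce}'"}
    # Only this script carries the nonce; inline scripts injected at
    # runtime won't match the policy and won't execute.
    body = f'<script nonce="{nonce}">bootApp();</script>'
    return headers, body

headers, body = render_response()
print(headers["Content-Security-Policy"])
```

The key property is that the nonce in the serialized DOM matches the one in the response header, which is exactly the “this tag came over the wire” signal described above.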
I think it’s worth stating once again that when we have conversations about the relevancy of “View Source”, we should define exactly what we mean by the term because viewing the HTML over the wire isn’t the same thing as viewing the DOM in the DevTools.
The Friendship That Made Google (2018)
One day in March of 2000, six of Google’s best engineers gathered in a makeshift war room. The company was in the midst of an unprecedented emergency. In October, its core systems, which crawled the Web to build an “index” of it, had stopped working. Although users could still type in queries at google.com, the results they received were five months out of date. More was at stake than the engineers realized. Google’s co-founders, Larry Page and Sergey Brin, were negotiating a deal to power a search engine for Yahoo, and they’d promised to deliver an index ten times bigger than the one they had at the time—one capable of keeping up with the World Wide Web, which had doubled in size the previous year. If they failed, google.com would remain a time capsule, the Yahoo deal would likely collapse, and the company would risk burning through its funding into oblivion. In a conference room by a set of stairs, the engineers laid doors across sawhorses and set up their computers. Craig Silverstein, a twenty-seven-year-old with a small frame and a high voice, sat by the far wall. Silverstein was Google’s first employee: he’d joined the company when its offices were in Brin’s living room and had rewritten much of its code himself. After four days and nights, he and a Romanian systems engineer named Bogdan Cocosel had got nowhere. “None of the analysis we were doing made any sense,” Silverstein recalled. “Everything was broken, and we didn’t know why.” Silverstein had barely registered the presence, over his left shoulder, of Sanjay Ghemawat, a quiet thirty-three-year-old M.I.T. graduate with thick eyebrows and black hair graying at the temples. Sanjay had joined the company only a few months earlier, in December. He’d followed a colleague of his—a rangy, energetic thirty-one-year-old named Jeff Dean—from Digital Equipment Corporation. Jeff had left D.E.C. ten months before Sanjay. They were unusually close, and preferred to write code jointly. 
In the war room, Jeff rolled his chair over to Sanjay’s desk, leaving his own empty. Sanjay worked the keyboard while Jeff reclined beside him, correcting and cajoling like a producer in a news anchor’s ear. Jeff and Sanjay began poring over the stalled index. They discovered that some words were missing—they’d search for “mailbox” and get no results—and that others were listed out of order. For days, they looked for flaws in the code, immersing themselves in its logic. Section by section, everything checked out. They couldn’t find the bug. Programmers sometimes conceptualize their software as a structure of layers ranging from the user interface, at the top, down through increasingly fundamental strata. To venture into the bottom of this structure, where the software meets the hardware, is to turn away from the Platonic order of code and toward the elemental universe of electricity and silicon on which it depends. On their fifth day in the war room, Jeff and Sanjay began to suspect that the problem they were looking for was not logical but physical. They converted the jumbled index file to its rawest form of representation: binary code. They wanted to see what their machines were seeing. On Sanjay’s monitor, a thick column of 1s and 0s appeared, each row representing an indexed word. Sanjay pointed: a digit that should have been a 0 was a 1. When Jeff and Sanjay put all the missorted words together, they saw a pattern—the same sort of glitch in every word. Their machines’ memory chips had somehow been corrupted. Sanjay looked at Jeff. For months, Google had been experiencing an increasing number of hardware failures. The problem was that, as Google grew, its computing infrastructure also expanded. Computer hardware rarely failed, until you had enough of it—then it failed all the time. Wires wore down, hard drives fell apart, motherboards overheated. Many machines never worked in the first place; some would unaccountably grow slower. 
Strange environmental factors came into play. When a supernova explodes, the blast wave creates high-energy particles that scatter in every direction; scientists believe there is a minute chance that one of the errant particles, known as a cosmic ray, can hit a computer chip on Earth, flipping a 0 to a 1. The world’s most robust computer systems, at NASA, financial firms, and the like, used special hardware that could tolerate single bit-flips. But Google, which was still operating like a startup, bought cheaper computers that lacked that feature. The company had reached an inflection point. Its computing cluster had grown so big that even unlikely hardware failures were inevitable. Together, Jeff and Sanjay wrote code to compensate for the offending machines. Shortly afterward, the new index was completed, and the war room disbanded. Silverstein was flummoxed. He was a good debugger; the key to finding bugs was getting to the bottom of things. Jeff and Sanjay had gone deeper. Until the March index debacle, Google’s systems had been rooted in code that its founders had written in grad school, at Stanford. Page and Brin weren’t professional software engineers. They were academics conducting an experiment in search technology. When their Web crawler crashed, there was no informative diagnostic message—just the phrase “Whoa, horsey!” Early employees referred to BigFiles, a piece of software that Page and Brin had written, as BugFiles. Their all-important indexing code took days to finish, and if it encountered a problem it had to re-start from the beginning. In the parlance of Silicon Valley, Google wasn’t “scalable.” We say that we “search the Web,” but we don’t, really; our search engines traverse an index of the Web—a map. When Google was still called BackRub, in 1996, its map was small enough to fit on computers installed in Page’s dorm room. In March of 2000, there was no supercomputer big enough to process it. 
The only way that Google could keep up was by buying consumer machines and wiring them together into a fleet. Because half the cost of these computers was in parts that Google considered junk—floppy drives, metal chassis—the company would order raw motherboards and hard drives and sandwich them together. Google had fifteen hundred of these devices stacked in towers six feet high, in a building in Santa Clara, California; because of hardware glitches, only twelve hundred worked. Failures, which occurred seemingly at random, kept breaking the system. To survive, Google would have to unite its computers into a seamless, resilient whole. Side by side, Jeff and Sanjay took charge of this effort. Wayne Rosing, who had worked at Apple on the precursor to the Macintosh, joined Google in November, 2000, to run its hundred-person engineering team. “They were the leaders,” he said. Working ninety-hour weeks, they wrote code so that a single hard drive could fail without bringing down the entire system. They added checkpoints to the crawling process so that it could be re-started midstream. By developing new encoding and compression schemes, they effectively doubled the system’s capacity. They were relentless optimizers. When a car goes around a turn, more ground must be covered by the outside wheels; likewise, the outer edge of a spinning hard disk moves faster than the inner one. Google had moved the most frequently accessed data to the outside, so that bits could flow faster under the read-head, but had left the inner half empty; Jeff and Sanjay used the space to store preprocessed data for common search queries. Over four days in 2001, they proved that Google’s index could be stored using fast random-access memory instead of relatively slow hard drives; the discovery reshaped the company’s economics. Page and Brin knew that users would flock to a service that delivered answers instantly. The problem was that speed required computing power, and computing power cost money. 
Jeff and Sanjay threaded the needle with software. Alan Eustace became the head of the engineering team after Rosing left, in 2005. “To solve problems at scale, paradoxically, you have to know the smallest details,” Eustace said. Jeff and Sanjay understood computers at the level of bits. Jeff once circulated a list of “Latency Numbers Every Programmer Should Know.” In fact, it’s a list of numbers that almost no programmer knows: that an L1 cache reference usually takes half a nanosecond, or that reading one megabyte sequentially from memory takes two hundred and fifty microseconds. These numbers are hardwired into Jeff’s and Sanjay’s brains. As they helped spearhead several rewritings of Google’s core software, the system’s capacity scaled by orders of magnitude. Meanwhile, in the company’s vast data centers technicians now walked in serpentine routes, following software-generated instructions to replace hard drives, power supplies, and memory sticks. Even as its parts wore out and died, the system thrived. Today, Google’s engineers exist in a Great Chain of Being that begins at Level 1. At the bottom are the I.T. support staff. Level 2s are fresh out of college; Level 3s often have master’s degrees. Getting to Level 4 takes several years, or a Ph.D. Most progression stops at Level 5. Level 6 engineers—the top ten per cent—are so capable that they could be said to be the reason a project succeeds; Level 7s are Level 6s with a long track record. Principal Engineers, the Level 8s, are associated with a major product or piece of infrastructure. Distinguished Engineers, the Level 9s, are spoken of with reverence. To become a Google Fellow, a Level 10, is to win an honor that will follow you for life. Google Fellows are usually the world’s leading experts in their fields. Jeff and Sanjay are Google Senior Fellows—the company’s first and only Level 11s. 
The Google campus, set beside a highway a few minutes from downtown Mountain View, is a series of squat, unattractive buildings with tinted windows. One Monday last summer, after a morning of programming together, Jeff and Sanjay went to lunch at a campus cafeteria called Big Table, which was named for a system they’d helped develop, in 2005, for treating numberless computers as though they were a single database. Sanjay, who is tall and thin, wore an ancient maroon Henley, gray pants, and small wire-frame glasses. He spied a table outside and walked briskly to claim it, cranking open the umbrella and taking a seat in the shade. He moved another chair into the sun for Jeff, who arrived a minute later, broad-shouldered in a short-sleeved shirt and wearing stylish sneakers. Like a couple, Jeff and Sanjay tell stories together by contributing pieces of the total picture. They began reminiscing about their early projects. “We were writing things by hand,” Sanjay said. His glasses darkened in the sun. “We’d rewrite it, and it was, like, ‘Oh, that seems near to what we wrote last month.’ ” “Or a slightly different pass in our indexing data,” Jeff added. “Or slightly different,” Sanjay said. “And that’s how we figure out—” “This is the essence,” Jeff said. “—this is the common pattern,” Sanjay said, finishing their thought. Jeff took a bite of the pizza he’d got. He has the fingers of a deckhand, knobby and leathery; Sanjay, who looks almost delicate in comparison, wondered how they ended up as a pair. “I don’t quite know how we decided that it would be better,” he said. “We’ve been doing it since before Google,” Jeff said. “But I don’t know why we decided it was better to do it in front of one computer instead of two,” Sanjay said. “I would walk from my D.E.C. research lab two blocks away to his D.E.C. research lab,” Jeff said. 
“There was a gelato store in the middle.” “So it’s the gelato store!” Sanjay said, delighted. Sanjay, who is unmarried, joins Jeff, his two daughters, and his wife, Heidi, on vacations. Jeff’s daughters call him Uncle Sanjay, and the five of them often have dinner on Fridays. Sanjay and Victoria, Jeff’s eldest, have taken to baking. “I’ve seen his daughters grow up,” Sanjay said, proudly. After the Google I.P.O., in 2004, they moved into houses that are four miles apart. Sanjay lives in a modest three-bedroom in Old Mountain View; Jeff designed his house, near downtown Palo Alto, himself, installing a trampoline in the basement. While working on the house, he discovered that although he liked designing spaces, he didn’t have patience for what he calls the “Sanjay-oriented aspects” of architecture: the details of beams, bolts, and loads that keep the grand design from falling apart. “I don’t know why more people don’t do it,” Sanjay said, of programming with a partner. “You need to find someone that you’re gonna pair-program with who’s compatible with your way of thinking, so that the two of you together are a complementary force,” Jeff said. They pushed back from the table and set out in search of soft-serve, strolling through Big Table and its drifting Googlers. Of the two, Jeff is more eager to expound, and while they walked he shared his soft-serve strategy. “I do the squish. I think the pushing-up approach adds stability,” he said. Sanjay, pleased and intent, swirled a chocolate-and-vanilla mix into his cone. In his book “Collaborative Circles: Friendship Dynamics and Creative Work,” from 2001, the sociologist Michael P. Farrell made a study of close creative groups—the French Impressionists, Sigmund Freud and his contemporaries. “Most of the fragile insights that laid the foundation of a new vision emerged not when the whole group was together, and not when members worked alone, but when they collaborated and responded to one another in pairs,” he wrote. 
It took Monet and Renoir, working side by side in the summer of 1869, to develop the style that became Impressionism; during the six-year collaboration that gave rise to Cubism, Pablo Picasso and Georges Braque would often sign only the backs of their canvases, to obscure which of them had completed each painting. (“A canvas was not finished until both of us felt it was,” Picasso later recalled.) In “Powers of Two: Finding the Essence of Innovation in Creative Pairs,” the writer Joshua Wolf Shenk quotes from a 1971 interview in which John Lennon explained that either he or Paul McCartney would “write the good bit, the part that was easy, like ‘I read the news today’ or whatever it was.” One of them would get stuck until the other arrived—then, Lennon said, “I would sing half, and he would be inspired to write the next bit and vice versa.” Everyone falls into creative ruts, but two people rarely do so at the same time. In the “theory building” phase of a new science or art, it’s important to explore widely without getting caught in dead ends. François Jacob, who, with Jacques Monod, pioneered the study of gene regulation, noted that by the mid-twentieth century most research in the growing field of molecular biology was the result of twosomes. “Two are better than one for dreaming up theories and constructing models,” Jacob wrote. “For with two minds working on a problem, ideas fly thicker and faster. They are bounced from partner to partner. They are grafted onto each other, like branches on a tree. And in the process, illusions are sooner nipped in the bud.” In the past thirty-five years, about half of the Nobel Prizes in Physiology or Medicine have gone to scientific partnerships. After years of sharing their working lives, duos sometimes develop a private language, the way twins do. They imitate each other’s clothing and habits. A sense of humor osmoses from one to the other. Apportioning credit between them becomes impossible. 
But partnerships of this intensity are unusual in software development. Although developers sometimes talk about “pair programming”—two programmers sharing a single computer, one “driving” and the other “navigating”—they usually conceive of such partnerships in terms of redundancy, as though the pair were co-pilots on the same flight. Jeff and Sanjay, by contrast, sometimes seem to be two halves of a single mind. Some of their best-known papers have as many as a dozen co-authors. Still, Bill Coughran, one of their managers, recalled, “They were so prolific and so effective working as a pair that we often built teams around them.” In 1966, researchers at the System Development Corporation discovered that the best programmers were more than ten times as effective as the worst. The existence of the so-called “10x programmer” has been controversial ever since. The idea venerates the individual, when software projects are often vast and collective. In programming, few achievements exist in isolation. Even so—and perhaps ironically—many coders see the work done by Jeff and Sanjay, together, as proof that the 10x programmer exists. Jeff was born in Hawaii, in July of 1968. His father, Andy, was a tropical-disease researcher; his mother, Virginia Lee, was a medical anthropologist who spoke half a dozen languages. For fun, father and son programmed an IMSAI 8080 kit computer. They soldered upgrades onto the machine, learning every part of it. Jeff and his parents moved often. At thirteen, he skipped the last three months of eighth grade to help them at a refugee camp in western Somalia. Later, in high school, he started writing a data-collection program for epidemiologists called Epi Info; it became a standard tool for field work and, eventually, hundreds of thousands of copies were distributed, in more than a dozen languages. 
(A Web site maintained by the Centers for Disease Control and Prevention, “The Epi Info Story,” includes a picture of Jeff at his high-school graduation.) Heidi, whom Jeff met in college, at the University of Minnesota, learned of the program’s significance only years later. “He didn’t brag about any of that stuff,” she said. “You had to pull it out of him.” Their first date was at a women’s basketball game; Jeff was in a gopher costume, cheerleading. Jeff’s Ph.D. focussed on compilers, the software that turns code written by people into machine-language instructions optimized for computers. “In terms of sexiness, compilers are pretty much as boring as it gets,” Alan Eustace said; on the other hand, they get you “very close to the machine.” Describing Jeff, Sanjay twirled his index finger around his temple. “He has a model going on as you’re writing code,” he said. “ ‘What is the performance of this code going to be?’ He’ll think about all the corner cases almost semi-automatically.” Sanjay didn’t touch a computer until he went to Cornell, at the age of seventeen. He was born in West Lafayette, Indiana, in 1966, but grew up in Kota, an industrial city in northern India. His father, Mahipal, was a botany professor; his mother, Shanta, took care of Sanjay and his two older siblings. They were a bookish family: his uncle, Ashok Mehta, remembers buying a copy of “The Day of the Jackal,” by Frederick Forsyth, its binding badly worn, and watching the Ghemawat children read the broken book together, passing pages along as they finished. Sanjay’s brother, Pankaj, became the youngest faculty member ever awarded tenure at Harvard Business School. (He is now a professor at N.Y.U. Stern.) Pankaj went to the same school as Sanjay and had a reputation as a Renaissance man. “I kind of lived in the shadow of my brother,” Sanjay said. As an adult, he retains a talent for self-effacement. 
In 2016, when he was inducted into the American Academy of Arts and Sciences, he didn’t tell his parents; their neighbor had to give them the news. In graduate school, at M.I.T., Sanjay found a tight-knit group of friends. Still, he never dated, and does so only “very, very infrequently” now. He says that he didn’t decide not to have a family—it just unfolded that way. His close friends have learned not to bother him about it, and his parents long ago accepted that their son would be a bachelor. Perhaps because he’s so private, an air of mystery surrounds him at Google. He is known for being quiet but profound—someone who thinks deeply and with unusual clarity. On his desk, he keeps a stack of Mead composition notebooks going back nearly twenty years, filled with tidy lists and diagrams. He writes in pen and in cursive. He rarely references an old notebook, but writes in order to think. At M.I.T., his graduate adviser was Barbara Liskov, an influential computer scientist who studied, among other things, the management of complex code bases. In her view, the best code is like a good piece of writing. It needs a carefully realized structure; every word should do work. Programming this way requires empathy with readers. It also means seeing code not just as a means to an end but as an artifact in itself. “The thing I think he is best at is designing systems,” Craig Silverstein said. “If you’re just looking at a file of code Sanjay wrote, it’s beautiful in the way that a well-proportioned sculpture is beautiful.” At Google, Jeff is far better known. There are Jeff Dean memes, modelled on the ones about Chuck Norris. (“Chuck Norris counted to infinity . . . twice”; “Jeff Dean’s résumé lists the things he hasn’t done—it’s shorter that way.”) But, for those who know them both, Sanjay is an equal talent. “Jeff is great at coming up with wild new ideas and prototyping things,” Wilson Hsieh, their longtime colleague, said. 
“Sanjay was the one who built things to last.” In life, Jeff is more outgoing, Sanjay more introverted. In code, it’s the reverse. Jeff’s programming is dazzling—he can quickly outline startling ideas—but, because it’s done quickly, in a spirit of discovery, it can leave readers behind. Sanjay’s code is social. “Some people,” Silverstein said, “their code’s too loose. One screen of code has very little information on it. You’re always scrolling back and forth to figure out what’s going on.” Others write code that’s too dense: “You look at it, you’re, like, ‘Ugh. I’m not looking forward to reading this.’ Sanjay has somehow split the middle. You look at his code and you’re, like, ‘O.K., I can figure this out,’ and, still, you get a lot on a single page.” Silverstein continued, “Whenever I want to add new functionality to Sanjay’s code, it seems like the hooks are already there. I feel like Salieri. I understand the greatness. I don’t understand how it’s done.” On a Monday morning this spring, Jeff and Sanjay stood in the kitchenette of Building 40, home to much of Google’s artificial-intelligence division. Behind them, a whiteboard was filled with matrix algebra; a paper about unsupervised adversarial networks lay on a table. Jeff, wearing a faded T-shirt and jeans, looked like a reformed beach bum; Sanjay wore a sweater and gray pants. The bright windows revealed a stand of tall pines and, beyond it, a field. Wherever Jeff works at Google, espresso machines follow. On the kitchenette’s counter, a three-foot-wide La Marzocco hummed. “We’re running late,” Sanjay said, over a coffee grinder. It was eight-thirty-two. After cappuccinos, they walked to their computers. Jeff rolled a chair from his own desk, which was messy, to Sanjay’s, which was spotless. He rested a foot on a filing cabinet, leaning back, while Sanjay surveyed the screen in front of them. 
There were four windows open: on the left, a Web browser and a terminal, for running analysis tools; on the right, two documents in the text editor Emacs, one a combination to-do list and notebook, the other filled with colorful code. One of Sanjay’s composition notebooks lay beside the computer. “All right, what were we doing?” Sanjay asked. “I think we were looking at code sizes of TensorFlow Lite,” Jeff said. This was a major new software project related to machine learning, and Jeff and Sanjay were worried that it was bloated; like book editors, they were looking for cuts. For this task, they’d built a new tool that itself needed to be optimized. “So I was trying to figure out how slow it is,” Sanjay said. “It’s pretty slow,” Jeff said. He leaned forward, still relaxed. “So that one was a hundred twenty kilobytes,” Sanjay said, “and it was, like, eight seconds.” “A hundred twenty thousand stack calls,” Jeff said, “not kilobytes.” “Well, kilobytes of text, yeah—about,” Sanjay said. “Oh, yeah, sorry,” Jeff said. “I don’t quite know what threshold we should pick for a unit size,” Sanjay said. “Half a meg?” “Seems good,” Jeff said. Sanjay began to type, and Jeff was drawn into the screen. “So you’re just saying, if it’s bigger than that we’ll just sample . . .” He left the rest unsaid; Sanjay answered him in code. When Sanjay drives, he puts his hands at ten and two and stares attentively ahead. He is the same way at the keyboard. With his feet spread shoulder-width apart, he looked as if he were working on his posture. His spindly fingers moved gently across the keys. A few younger programmers began to trickle in. Soon they reached a minor milestone, and Sanjay typed a command to test their progress. Seeming worn out, he checked his e-mail while it ran. The test finished. He didn’t notice. “Hey,” Jeff said. He snapped his fingers and pointed at the screen. 
Although in conversation he is given to dad jokes and puns, he can become opinionated, brusque, and disapproving when he sits at a computer with Sanjay. Sanjay takes this in stride. When he thinks Jeff is moving too fast, he’ll lift his hands off the keyboard and spread his fingers, as if to say, “Stop.” (In general, Jeff is the accelerator, Sanjay the brake.) This is as close as they get to an argument: in twenty years together, they can’t remember raising their voices. Sanjay scrolled, bringing a new section of code into view. “Like, all that can be made into a routine, couldn’t it?” Jeff said. “Mmm,” Sanjay agreed. Jeff cracked his knuckles. “Seems doable. Should we do that?” Sanjay was wary. “No, I—” “So we’re going to ignore a problem?” Jeff said indignantly. “No, I mean, we’re just trying to get an idea of the types of things that are going on. So we could make notes about it, right?” “O.K.,” Jeff said happily, his mood having turned on a dime. They dictated a note together. Lunchtime approached. They had worked for two hours with one ten-minute break, talking most of the time. (A lesser programmer watching them would have been impressed, more than anything else, by the fact that they never stopped or got stuck.) It’s standard engineering practice to have your code reviewed by another coder, but Jeff and Sanjay skip this step, entering, in their log, a perfunctory “lgtm,” for “looks good to me.” In a sense, they had been occupied by minutiae. Their code, however, is executed at Google’s scale. The kilobits and microseconds they worry over are multiplied as much as a billionfold in data centers around the world—loud, hot, warehouse-size buildings whose unending racks of processors are cooled by vats of water. On days like these, Jeff has been known to come home and tell his daughters, “Sanjay and I sped up Google Search by ten per cent today.” Jeff and Sanjay gave Google what was arguably its biggest single upgrade in the course of four months in 2003. 
They did it with a piece of software called MapReduce. The idea came to them the third time they rewrote Google’s crawler and indexer. It occurred to them that, each time, they had solved an important problem: coördinating work in a vast number of geographically distributed, individually unreliable computers. Generalizing their solution would mean that they could avoid revisiting the problem again and again. But it would also create a tool that any programmer at Google could use to wield the machines in its data centers as if they were a single, planet-size computer. MapReduce, which Jeff and Sanjay wrote in a corner office overlooking a duck pond, imposed order on a process that could be mind-bendingly complicated. Before MapReduce, each programmer had to figure out how to divide and distribute data, assign work, and account for hardware failures on her own. MapReduce gave coders a structured way of thinking about these problems. Just as a chef maintains mise en place—prepping ingredients before combining them—so MapReduce asks programmers to divide their tasks into two stages. First, a coder tells each machine how to do the “map” stage of a task (say, counting how many times a word appears on a Web page); next, she writes instructions for “reducing” all the machines’ results (for instance, by adding them up). MapReduce handles the details of distribution—and, by doing so, hides them. The following year, Jeff and Sanjay rewrote Google’s crawling and indexing system in terms of MapReduce tasks. Soon, when other engineers realized how powerful it was, they started using MapReduce to process videos and render the tiles on Google Maps. MapReduce was so simple that new tasks kept suggesting themselves. Google has what’s known as a “diurnal usage curve”—there’s more traffic during the day than there is at night—and MapReduce tasks began soaking up the idle capacity. A dreaming brain processes its daytime experiences. Now Google processed its data. 
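The word-counting example in the paragraph above can be made concrete with a toy sketch in Python. This simulates only the two-stage programming model, not Google's actual MapReduce, which was a C++ library coordinating thousands of machines; the function names and tiny "pages" here are illustrative assumptions:

```python
from collections import defaultdict

def map_phase(page_text):
    # "Map" stage: each machine emits a (word, 1) pair
    # for every word on the page it was assigned.
    return [(word, 1) for word in page_text.split()]

def reduce_phase(pairs):
    # "Reduce" stage: sum the counts emitted by all the mappers.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# Three "pages"; in a real cluster each would be mapped on a different machine.
pages = ["the quick brown fox", "the lazy dog", "the fox"]
all_pairs = [pair for page in pages for pair in map_phase(page)]
word_counts = reduce_phase(all_pairs)
print(word_counts["the"], word_counts["fox"])  # 3 2
```

The value of the framework lay in everything outside these two functions: partitioning the input, shipping intermediate pairs between machines, and retrying failed workers were all handled once, by MapReduce itself, rather than by each programmer.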
There were inklings, early on, that Google was an A.I. company pretending to be a search company. In 2001, Noam Shazeer, who shared an office with Jeff and Sanjay, had grown frustrated with the spell-checker that Google was licensing from another company: it kept making embarrassing mistakes, such as telling users who’d typed “TurboTax” that they probably meant “turbot ax.” (A turbot is a flatfish that lives in the North Atlantic.) A spell-checker is only as good as its dictionary, and Shazeer realized that, in the Web, Google had access to the biggest dictionary there had ever been. He wrote a program that used the statistical properties of text on the Web to determine which words were likely misspellings. The software learned that “pritany spears” and “brinsley spears” both meant “Britney Spears.” When Shazeer demonstrated the program at Google’s weekly T.G.I.F. gathering, employees tried, but mostly failed, to fool it. In collaboration with Jeff and an engineer named Georges Harik, Shazeer applied similar techniques to associate ads with Web pages. Ad targeting became a river of money that the company directed back into its computing infrastructure. It was the beginning of a feedback loop—bigness would be the source of Google’s intelligence; intelligence the source of its wealth; and wealth the source of its growth—that would make the company extraordinarily, and unsettlingly, dominant. As enterprising coders used MapReduce to derive insights from Google’s data, it became possible to transcribe users’ voice mails, answer their questions, autocomplete their queries, and translate among more than a hundred languages. Such systems were developed using relatively uncomplicated machine-learning algorithms. Still, “very simple techniques, when you have a lot of data, work incredibly well,” Jeff said. 
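Shazeer's actual program isn't public, but the core idea, preferring the candidate spelling that occurs most often in a large corpus, can be sketched in the style of Norvig's well-known corrector. The tiny corpus, the function names, and the one-edit limit below are all illustrative assumptions:

```python
import re
from collections import Counter

# A toy corpus standing in for "the biggest dictionary there had ever been".
corpus = "britney spears sings pop songs britney spears is famous"
WORD_FREQ = Counter(re.findall(r"\w+", corpus))

def edits1(word):
    # All strings one edit away: deletions, transpositions,
    # replacements, and insertions.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    # Prefer the candidate that appears most often in the corpus;
    # fall back to the word itself if nothing nearby is known.
    candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=WORD_FREQ.get)

print(correct("britny"))  # britney
```

With the Web as the corpus instead of a one-line string, the same statistics are what let the real system learn that "pritany spears" meant "Britney Spears".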
As “data, data, data”—stored and processed with BigTable, MapReduce, and their successors—became the company’s prime directive, Google’s globe-spanning infrastructure became more seamless and supple. The idea of distributed computation was an old one; concepts like “cloud computing” and “big data” predated Google’s rise. But, by making it intellectually manageable for ordinary coders to write distributed programs, Jeff and Sanjay had given Google a new level of mastery over such technologies. Users may have sensed that something had changed: Google’s cloud was getting smarter. In 2004, because Jeff and Sanjay thought it would be useful to astronomers, geneticists, and other scientists with lots of data to process, they wrote a paper, “MapReduce: Simplified Data Processing on Large Clusters,” and made it public. The MapReduce paper arrived as a deus ex machina. Cheap hardware and the growth of Web services and connected devices had led to a deluge of data, but few companies had the software to process the information. Two engineers who’d been struggling to scale a small search engine called Nutch—Mike Cafarella and Doug Cutting—were so convinced of MapReduce’s importance that they decided to build a free clone of the system from scratch. They eventually called their project Hadoop, after a stuffed elephant beloved by Cutting’s son. As Hadoop matured, it was adopted by half of the Fortune 50. It became synonymous with “Big Data.” Facebook used “Hadoop MapReduce,” as it’s often known, to store and process user metadata—information about what was clicked, what was Liked, and which ads were viewed. At one point, it had the largest Hadoop cluster in the world. Hadoop MapReduce helped power LinkedIn and Netflix. Randy Garrett, a former director of technology at the National Security Agency, remembers demonstrating the technology to the agency’s director, General Keith Alexander. Hadoop performed an analysis task eighteen thousand times faster than the previous system had. 
It became the foundation for a new approach to intelligence gathering which some observers call “collect it all.” Jeff has a restless nature: a problem becomes less interesting to him once he can see the shape of its solution. In 2011, as the world embraced the cloud, he began collaborating with Andrew Ng, a computer-science professor from Stanford who was leading a secretive project at Google to conduct research on neural networks—software programs composed of virtual “neurons.” Jeff had encountered neural nets during his undergraduate days; back then, they hadn’t been able to solve real-world problems. Ng had told Jeff that this was changing. At Stanford, researchers had achieved some exciting results when the nets were given access to large quantities of data. With Google’s scale, Ng thought, neural networks could become not just useful but powerful. Neural networks are profoundly different from traditional computer programs. Their behavior isn’t specified by coders in the usual way; instead, it’s “learned” using inputs and feedback. Jeff’s knowledge of neural networks hadn’t advanced much since his undergrad years, and Heidi watched as their bathroom filled with textbooks. Jeff began spending about a day a week on the project, which was called “Google Brain.” Many at Google were doubtful of the technology. “What a waste of talent,” Alan Eustace, his manager at the time, remembers thinking. Sanjay couldn’t understand Jeff’s move, either. “You work on infrastructure,” he thought. “What are you doing over there?” During the next seven years, the Google Brain team developed neural nets that beat the state of the art in machine translation and speech and image recognition. Eventually, they replaced Google’s most important algorithms for ranking search results and targeting ads, and Google Brain became one of the fastest-growing teams in the company. 
Claire Cui, an engineer who started in 2001, said that Jeff’s involvement marked a turning point for A.I. at Google: “There were people who believed in it, and there were people who didn’t believe in it. Jeff proved that it can work.” A.I. had turned out to depend on scale, which Jeff, the systems engineer, delivered. As part of the effort, he led the development of a program called TensorFlow—an attempt to create something like the MapReduce of A.I. TensorFlow simplified the task of distributing a neural network across a fleet of computers, turning them into one big brain. In 2015, when TensorFlow was released to the public, it became the lingua franca of A.I. Recently, Sundar Pichai, Google’s C.E.O., declared it an “A.I. first” company and made Jeff the head of its A.I. initiatives. Jeff now spends four days a week running Google Brain. He directs the work of three thousand people. He travels to give talks, holds a weekly meeting to work on a new computer chip (the Tensor Processing Unit, designed specifically for neural networks), and is helping with the development of AutoML, a system that uses neural nets to design other neural nets. He has time to code with Sanjay only once a week. Feats of engineering tend to erase themselves. We remember the great explorers of the eighteenth century—James Cook, George Vancouver—but not John Harrison, the carpenter from Yorkshire who, after decades of work, made a clock reliable enough to reckon longitude at sea. Recently, Jeff and Sanjay were enjoying margaritas and enchiladas at Palo Alto Sol, a Mexican restaurant they frequent, when Jeff pulled out his phone. “When did Gmail first come out?” he asked. The phone replied, “April 1st, 2004.” Sanjay, who is sensitive in social situations, seemed not to appreciate the dinner-table distraction, but Jeff was elated. 
Google could now talk, listen, and answer questions, using a stack of programs, seamlessly integrated and largely invisible, stretching from his phone to data centers around the world. Today, their roles have diverged. At Google, Sanjay is what is known as an “individual contributor”—a coder who works alone and manages no one. For this, he feels grateful. “I would not want Jeff’s job,” he says. He’s currently working on software that makes it easier for engineers to combine and control the dozens of programs—for fetching news, photographs, prices—that start running as soon as a user enters text into Google’s search box. Once a week, he meets with a group of “Area Tech Leads”—Google’s engineering Jedi council—to make technical decisions that affect the entire company. If Google were a house, Jeff would be building an addition. Sanjay is shoring up the structure, reinforcing the beams, tightening the bolts. Meanwhile, during their Monday coding sessions, they have started something new. It’s an A.I. project: an attempt, Jeff says, to train a “giant” machine-learning model to do thousands, or millions, of different tasks. Jeff has been thinking about the idea for years; recently, he decided that it was possible. He and Sanjay plan to build a prototype that a team can grow around. In the world of software, the best way to lead is with code. “I think they miss each other,” Heidi, Jeff’s wife, says. It was when their collaboration slowed that they began having their Friday dinners. On a Sunday in March, Jeff and Sanjay met for a hike outside Cupertino. The weather was clear and brisk, but hot in the sun. Jeff arrived at the trailhead in a blue Tesla Roadster with a Bernie 2016 bumper sticker. Sanjay, close behind, had his own Tesla, a red Model S. Sanjay had spent the morning reading. Jeff had played soccer. (A device attached to his calf told him that he’d run 7.1 miles.) 
Two decades after building the March index, Jeff resembled a retired endurance athlete, his skin worn by the sun. Sanjay seemed hardly to have aged. The path was a six-mile loop that climbed through dense forests. Jeff led the way. In the woods, they reminisced about how quickly Google had grown. Sanjay recalled how, during the company’s first growth spurt, a plumber had installed two toilets in a single stall in the men’s bathroom. “I remember Jeff’s comment,” he said. “ ‘Two heads are better than one!’ ” He laughed. They descended out of the woods and into dry, exposed country. A turkey vulture flew overhead. “The hills here are steeper than I thought,” Jeff said. “I thought somebody said this was a pretty flat hike,” Sanjay said. “I guess this explains why there’s no biking roads up that side,” Jeff said. They climbed back into the woods. On a switchback, Jeff caught a glimpse beyond the trees. “We’re gonna have a good lookout at some point,” he said. The trail opened out onto a hilltop, high and wide, treeless, with panoramic views. There was a haze on the horizon. Still, they could see the Santa Cruz Mountains to the south and Mission Peak to the east. “Sanjay, there’s your office!” Jeff said. They stood together, looking across the valley. ♦
A First Look at MarkoJS – eBay's JavaScript Framework
As some of you may know, I recently joined the Marko team, but I've been working on a future version and haven't actually gotten to dig into the current one. So join me as I learn how Marko works.

Today we are going to look at building a simple application using MarkoJS. What is MarkoJS, you ask? It's a JavaScript UI framework developed at eBay in 2013 with a keen focus on server-side rendering. More than just being built at eBay, the majority of eBay is built on it. If you haven't heard of it before, you are in good company. Even though it was built by a large tech company, Marko has never had the exposure or carried the same clout as libraries like React or Angular.

Marko has a unique heritage and has very obviously inspired libraries like Vue and Svelte. Most amazing of all, the things it did best from the beginning it is still the best at half a decade later: automatic partial hydration, streaming while you load/render, and the fastest server rendering of any JS framework.

Going to the website at https://markojs.com/, I can see right away that Marko uses Single File Components, similar to Vue and Svelte*. The second thing I notice is that the syntax is a bit unusual:

```marko
<div>${state.count}</div>
<button on-click("increment")>Click me!</button>
```

It looks like HTML, but it has additional special syntax on the tags. Marko views itself as a markup-based language, a superset of HTML. This is like the antithesis of "It's just JavaScript". It makes sense, since Marko has its roots in server-side template languages like Jade, Handlebars, or EJS. That heritage has influenced its design immensely, and also served as a high bar to reach in terms of SSR rendering performance.

*Note: Marko does support splitting a component across multiple files if desired: https://markojs.com/docs/class-components/#multi-file-components

So let's take the Marko CLI for a test run. 
You can get started with Marko with:

```shell
npx @marko/create
```

There is a short interactive CLI asking for the project name and which template I'd like to use. Let's choose the default template. This creates a project with a basic folder structure already in place. It looks like a pretty standard setup, with a src directory containing components and pages directories.

Firing it up in VSCode, the first thing to notice is that there is no index.js. No entry point. It appears that Marko is built with multi-page apps in mind: you just make a page in the pages directory and that is your route. There is an index.marko which serves as the landing page:

```marko
<app-layout title="Welcome to Marko">
  <mouse-mask.container>
    <img class="logo" src="./logo.svg" alt="Marko"/>
    <p>Edit <code>./pages/index.marko</code> and save to reload.</p>
    <a href="https://markojs.com/docs/">Learn Marko</a>
  </mouse-mask.container>
</app-layout>

style {
  .container {
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
    font-size: 2em;
    color: #fff;
    background: #111;
    height: 100%;
    width: 100%;
  }
  img.logo {
    width: 400px;
  }
}
```

This page has a markup block and a style block. The markup starts with layout components that wrap the content of the page, which is a logo and a docs link. Looking at the app-layout component, we do in fact see our top-level HTML structure:

```marko
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8"/>
    <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
    <meta name="description" content="A basic Marko app."/>
    <title>${input.title}</title>
  </head>
  <body>
    <${input.renderBody}/>
  </body>
</html>

style {
  html, body {
    font-family: system-ui;
    padding: 0;
    margin: 0;
  }
  code {
    color: #fc0;
  }
  a {
    color: #09c;
  }
}
```

So the pattern seems to be an entry point for each page, and we can share components between them to create common layouts and controls. input is the equivalent of props in other libraries, and input.renderBody looks to be the replacement for props.children. 
There is a subtle difference in that you can think of renderBodys as function calls: the children aren't created until that part of the template is executed. The last component, mouse-mask, does some manipulation of the mouse input to create an interesting visual effect over our logo. I'm not going to focus on that for the moment, though. Let's just run the example. We can start Marko's dev server by running:

```shell
npm run dev
```

This automatically starts a build in watch mode and serves our files over port 3000. Loading it in the browser, we can see the visual effect as we move the mouse over the page. We can also try the production build with `npm run build` and then view it using `npm start`.

A quick view in the Chrome inspector shows this simple example weighs in at 15.2kb. Looking at the chunks, it is fair to say Marko itself weighs in around 13kb. Not the smallest library, but that is comparable to Inferno or Mithril, and it comes in under any of the most popular libraries.

That's all fine, but I want to make my own site out of this. So I deleted everything except the app-layout component and emptied the Marko template. I'm no CSS expert, but I figured I could throw together a quick directory for a personal blog inspired by the design of a popular developer's blog.

For this exercise I just threw some data at the top of the index.marko file. I also included a function to properly format the dates. 
```marko
static const POSTS = [
  {
    title: "Making Sense of the JS Framework Benchmark",
    caption: "A closer look at the best benchmark for JS Frameworks",
    link: "https://dev.to/ryansolid/making-sense-of-the-js-framework-benchmark-25hl",
    date: "10/29/2020",
    duration: 9
  },
  {
    title: "Why I'm not a fan of Single File Components",
    caption: "Artificial boundaries create artificial overhead",
    link: "https://dev.to/ryansolid/why-i-m-not-a-fan-of-single-file-components-3bfl",
    date: "09/20/2020",
    duration: 6
  },
  {
    title: "Where UI Libraries are Heading",
    caption: "Client? Server? The future is hybrid",
    link: "https://dev.to/ryansolid/where-web-ui-libraries-are-heading-4pcm",
    date: "05/20/2020",
    duration: 8
  },
  {
    title: "Maybe Web Components are not the Future",
    caption: "Sometimes a DOM element is just a DOM element",
    link: "https://dev.to/ryansolid/maybe-web-components-are-not-the-future-hfh",
    date: "03/26/2020",
    duration: 4
  }
];

static function formatDate(date) {
  const d = new Date(date);
  return d.toLocaleDateString("en-US", {
    year: "numeric",
    month: "long",
    day: "numeric"
  });
}
```

Notice the use of the word static: it tells Marko's compiler to run this code once when the file loads, so it exists outside the template instance. From there I added some markup to render this data. It's mostly HTML. Interestingly enough, Marko doesn't need any sort of delimiter for attribute assignment; there are no { } or the like. 
```marko
<app-layout title="Solidarity.io">
  <div class="container">
    <h1>Solidarity</h1>
    <div class="intro-header">
      <img class="avatar" alt="avatar" src="https://pbs.twimg.com/profile_images/1200928608295849984/1A6owPq-_400x400.jpg"/>
      <p>A personal blog by <a href="https://twitter.com/RyanCarniato" target="_blank">Ryan Carniato</a></p>
    </div>
    <ul class="blog-list">
      <for|post| of=POSTS>
        <li class="blog-list-item">
          <h3><a href=post.link target="_blank">${post.title}</a></h3>
          ${formatDate(post.date)} •
          <for|coffee| from=0 to=(post.duration/5)>☕️</for>
          ${post.duration} minute read
          <p>${post.caption}</p>
        </li>
      </for>
    </ul>
  </div>
</app-layout>

style {
  .container {
    display: flex;
    flex-direction: column;
    justify-content: center;
    align-items: center;
    color: #fff;
    background: #333;
    height: 100%;
    width: 100%;
    min-height: 100vh;
  }
  .avatar {
    width: 50px;
    border-radius: 50%;
  }
  .blog-list {
    list-style-type: none;
    margin: 0;
    padding: 0;
  }
  .blog-list-item h3 {
    font-size: 1rem;
    margin-top: 3.5rem;
    margin-bottom: 0.5rem;
  }
  .blog-list-item a {
    color: light-blue;
    text-decoration: none;
    font-size: 2em;
    font-weight: 800;
  }
}
```

The key to this example is using the `<for>` component. I use it both to iterate over the list of posts and to iterate over a range to show my cups of coffee (one per 5 minutes of reading time). This is definitely the biggest syntax difference:

```marko
<for|post| of=POSTS>
  <a href=post.link>${post.title}</a>
</for>
```

What is this even doing? Well, the pipes are something Marko calls Tag Parameters. It is basically a way to do the equivalent of render props. If this were a React component, we'd write:

```jsx
<For of={POSTS}>
  {(post) => <a href={post.link}>{post.title}</a>}
</For>
```

And that's it. The end result is we have our simple blog landing page. Just to see how it looks, I made the production build and ran it. Everything looks good. But I think the most noticeable thing is the JS bundle size. 
There is none. Right: we didn't do anything that required JavaScript in the client, so we didn't need to ship the Marko runtime or any bundled JS to the browser. Marko is optimized out of the gate, with no manual interference, to ship only the JavaScript you need.

Well, this wasn't meant to be deep. Just a first look at running MarkoJS. I will say it definitely has a syntax to get used to. I think it is interesting that, for a tag-based UI language, it has a lot of the same features you'd find in pure JavaScript libraries. Patterns like HoCs (Higher-Order Components) and render props seem to be perfectly applicable here. The experience was so similar to developing in other modern JavaScript frameworks that I forgot for a second it was server-oriented and defaults to shipping minimal JavaScript to the browser. In our case, since this was completely static, no JavaScript was sent at all.

I'm client-side oriented at heart, so this was definitely a departure for me. No JavaScript by default is a new world of possibilities for a whole category of sites. I hope you will join me next time when I continue to explore MarkoJS and unveil all its powerful features.
Why tracking your product’s performance with Event Data is ineffective
Many businesses have built great analytics products to help with tracking the actions your users are taking in your product (Mixpanel, Pendo, and Amplitude, to name a few). These products use an events-based data model where they track user behavior, usually client-side, so you can understand and visualize behavior like page views and button clicks. While this event-based model has been somewhat helpful at modeling high-level funnels and actions, we’ve learned over time that this model often creates more problems and challenges than they actually solve. Speaking as a product manager who has lobbied to integrate these kinds of analytics tools in the past, I’m sharing my thoughts on why events are usually wrong for tracking product usage. When I worked at TaskRabbit, my fellow product managers and I prioritized integrating one of these event-based analytics tools into our products. The promise was very enticing: gaining visibility into how our users were using our apps, allowing us to analyze our funnels, and then driving product insights. While this tool was great for tracking client-side actions like button clicks and page views, our funnel analyses inevitably needed to incorporate server-side actions for us to get the full picture. However, our engineering team was reluctant to add more events for a few reasons: The engineers obliged due to my persistence, but I should have listened. Now, every new feature had an additional level of overhead and discussion: do we need to add a new event both server-side and client-side? How do we want to track it? How much time will it take to add this tracking? Do we really need to track this event? Let’s say you do choose to instrument that event: doing so is an extra task that your engineering team works on, slowing down velocity. And then, be honest with yourself: will you actually look at data? What if you want to change or remove the event tracking? That’s another task for your eng team again. 
And on the other end of the spectrum, let’s say you choose not to instrument the event. A month goes by and now you want to understand some behavior you’re seeing in your product. To diagnose the issue, you need funnel data that doesn’t exist. And even if you ask an engineer to instrument that event now, you won’t have data going back in time. You’re stuck. With events, you now have multiple data sources. Do you trust the data in Google Analytics/Amplitude and friends? Or do you trust your data warehouse? Analyzing your funnel now becomes fraught as you might end up combining data between different data sources. The data in your data warehouse will probably differ from the data that you see in Mixpanel, so you can either spend days if not weeks trying to debug the source of the discrepancy, or you can just accept that there’s some amount of variance (probably 10%-ish) between your data sources. Inevitably, you’ll likely pick one data source as your source of truth and just ignore or heavily discount the others. As a result of all of these factors, event data is at best directional in nature. Your Data Science and Engineering teams won’t look at event data as a source of truth. They’ll look at your product data or data warehouse and trust that over event data almost always. As such, event data is at best useful for a high-level understanding for funnel performance and directional insights. But let’s say with those insights, you actually want to start personalizing and iterating on the user experience. Here’s where events fall down again. To personalize that user experience effectively, directional data can only take you so far-- you want to personalize that experience down to the specific user, and being able to identify that specific user by user ID or account requires source-of-truth data such as your product data or data warehouse. 
There are a number of different interaction types in products: In the first instance, if the funnel consists entirely of actions that happen while the user is using your app or your website, then event-based analytics might work for you. In most cases though, some processes or functions in your business actually happen once the user has "left the funnel": either another user is taking an action or your product is doing something asynchronously. Building this funnel becomes trickier, and relying purely on front-end events is fraught with inconsistencies. This use case happens more often than not. Are you notifying other users of a new post? Can a purchase be refunded or incomplete in some way? To really get the full picture of your customer's experience, you need to consider more than just which buttons they clicked.

One of the major challenges with events I mentioned earlier is that events are only useful from the date when you first implemented them. If you want to analyze how someone has gone through your funnel, you won't have data going back to the beginning of your product's history. While some services are starting to support historical backfilling, these processes normally come at a significant cost, both in terms of price and engineering time. And if any of these event-based services ever has an outage, the event data during that time is lost forever. As a result, you can't rely on event data for core product flows such as triggering emails or campaigns, or modifying the product UX. To remedy these situations, you'll have to do substantial work to recover events from your logs and de-duplicate any events that may have been captured. It's a ton of work, and often not worth it.

Instead of relying on these event streams, use the data you already have and trust: your product database or a data warehouse like Redshift. Your product is already running off of your product database, and that's the data that you trust.
The open-source product we’ve built at Grouparoo allows you to pull in data from your source-of-truth data sources, segment your users into groups, and then send those user profiles and groups to 3rd party tools like email providers, customer support tools, and push providers. Our product is open-source and free, so feel free to read our docs to try it out, or reach out to us if you want to chat!
Show HN: Manage Kubernetes Admission Webhook's Certificates with Cert-Manager
developer-guy · Published in Trendyol Tech · 11 min read · Jan 6, 2022

Authors: Furkan Türkal, developer-guy, Erkan Zileli

⛵️ Kubernetes Admission Controllers · 📝 cert-manager and CA Injector · 🔐 Vault PKI (Public Key Infrastructure) · 💻 Installation · 👀 How to monitor certificates? · ✨ How to accomplish hot-reloading your HTTP server with renewed certificates without having downtime? · 🎯 Conclusion

The certificate management process has always been a problem for people who want to manage certificates for their applications in various environments, because there are always challenges: how to store them, where to store them, how to revoke or renew them, and so on. Today, we'll talk about managing the certificates of your Kubernetes Admission Webhook applications on a Kubernetes platform in a more automated fashion. We talked about five ways of managing certificates in our previous post; if you want to learn more about them, please refer to that post. There, we also mentioned the cert-manager CA Injector and used a self-signed issuer, but today we'll be using the Vault issuer backed by Vault's PKI secrets engine. Without further ado, let's jump into the details of all the technologies we will use in this guide.

As mentioned above, we'll be talking about managing certificates on behalf of applications known as Kubernetes Admission Webhooks. Still, this approach is not tied to these types of applications only, so you can use the same method for other kinds of applications as long as they run on Kubernetes. An admission controller is a piece of code that intercepts requests to the Kubernetes API server before the persistence of the object, but after the request is authenticated and authorized. There are two special controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook.
These execute the mutating and validating (respectively) admission control webhooks configured in the API. In addition to compiled-in admission plugins, admission plugins can be developed as extensions and run as webhooks configured at runtime. Admission webhooks are HTTP callbacks that receive admission requests and do something with them. Since a webhook must be served via HTTPS, we need proper certificates for the server. These certificates can be self-signed (or rather: signed by a self-signed CA), but we need to provide Kubernetes with the respective CA certificate to use when talking to the webhook server. In addition, the common name (CN) of the certificate must match the server name used by the Kubernetes API server, which for internal services is `<service-name>.<namespace>.svc`. https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/

Now that we understand Kubernetes Admission Webhooks and their need for certificates, let's continue by explaining how cert-manager can help us manage them. Certificate lifecycle management is a critical requirement for a Kubernetes Admission Webhook. As we mentioned, cert-manager comes into play to simplify that process, saving engineers time and improving security by negating the risk of human error. Cert-manager is well known and has become the de facto standard software for managing certificates in Kubernetes environments. It is a popular open-source tool that automates issuing certificates on-demand using Kubernetes APIs and renewing the certificates before they expire, and it uses Kubernetes CRDs heavily to do that. In addition, it provides valuable abstractions for TLS certificates and various types of Issuers in the form of Custom Resource Definitions, making it super easy to use and manage from an end-user perspective. We'll be talking about issuers in detail when we install cert-manager.
Still, it is worth mentioning that the first thing you'll need to configure after you've installed cert-manager is an issuer, which you can then use to issue certificates. Cert-manager comes with several built-in certificate issuers. Vault is one of them, and this is the issuer that we'll demonstrate today. The Vault issuer represents the certificate authority Vault, a multi-purpose secret store that can be used to sign certificates for your Public Key Infrastructure (PKI). To learn more, please refer to the official documentation.

So far, so good: we've covered creating certificates for the application. Still, there is one more thing we need to do: provide the CA (Certificate Authority) information in the Kubernetes Admission Webhook registration manifest, which refers to the caBundle property of the configuration. This is where the CA Injector comes into the picture. The CA Injector helps configure the CA certificates for Mutating Webhooks, Validating Webhooks, and Conversion Webhooks. An injectable resource MUST have one of these annotations: cert-manager.io/inject-ca-from, cert-manager.io/inject-ca-from-secret, or cert-manager.io/inject-apiserver-ca, depending on the injection source. To learn more, please refer to the official documentation.

In a nutshell, the Vault PKI secrets engine can streamline generating dynamic X.509 certificates. By using this secrets engine, services can get certificates without going through the usual manual process of generating a private key and CSR, submitting it to a CA, and waiting for the verification and signing process to complete, while also providing an authentication and authorization mechanism for validation. To learn more, please refer to the official documentation.

This guide assumes that we have a webhook application called config-admission-webhook.
It lives under the platform namespace, behind a Kubernetes service called config-sidecar-injector-service. First, we need to install Vault on Kubernetes with dev mode enabled, because we don't want to deal with sealing/unsealing and managing storage. To do so, we'll be using Helm to install Vault, because HashiCorp provides an excellent chart for it. By the way, you can use any Kubernetes cluster for this hands-on; in this case, we'll be using Minikube. So, let's start by creating our Kubernetes cluster.

$ minikube start -p demo

Then, install Vault by making use of Helm. Once you deploy Vault, please ensure that you have a running pod named vault-0 in your default namespace before moving on to the next step.

$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
vault-0   1/1     Running   0          32s

The next step will be configuring Vault to enable the PKI secrets engine. Once you deploy Vault with dev mode enabled, your root token will be "root". We'll be using the commands provided in the official documentation on the Vault website; you can find the commands and details on that page. Before doing that, we should log in to Vault by making use of the following command:

$ kubectl exec vault-0 -- vault login root
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                root
token_accessor       cDLx7PbVXcY3ibzweBBgki0h
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]

First, start an interactive shell session on the vault-0 pod. The commands we execute after that will run inside the pod until we warn you to go back to your terminal.

$ kubectl exec -ti vault-0 -- /bin/sh
/ $

And run the commands within the following gist: As I said before, we need to retrieve a token from Vault that can work with the PKI secrets engine.
This is where Kubernetes authentication methods come into play. Then run the exec command to start an interactive shell, and run the commands within the gist to configure the Kubernetes authentication method: The next part is installing cert-manager itself. To do so, run the commands within the following gist: Please ensure that all the pods are running within the cert-manager namespace before moving on to the next step:

$ kubectl get pods -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-57d89b9548-wmlrl              1/1     Running   0          9m39s
cert-manager-cainjector-5bcf77b697-rsbh9   1/1     Running   0          9m39s
cert-manager-webhook-8687fc66d4-9hfvq      1/1     Running   0          9m39s

Once you set up everything, the next thing is configuring the Vault issuer; let's do that. To do so, you need to apply the following manifest file, but first we need to create a service account within the platform namespace:

$ kubectl create serviceaccount issuer -n platform
$ ISSUER_SECRET_REF=$(kubectl get serviceaccount issuer -n platform -o json | jq -r ".secrets[].name"); echo $ISSUER_SECRET_REF
issuer-token-7z8jj

Now, let's create the first certificate. It will be as simple as what we've done so far. We'll be applying the following manifest: Once you apply the manifest, you should see the status as Ready:

$ kubectl get certificates -n platform config-sidecar-injector-service
NAME                              READY   SECRET                         AGE
config-sidecar-injector-service   True    config-admission-webhook-tls   5s

Also, you can get the certificate's details by using the view-secret kubectl plugin and the cfssl tool.

$ kubectl view-secret config-admission-webhook-tls -n platform tls.crt | cfssl certinfo -cert -

Do not forget to add config-admission-webhook-tls as a volume. Last but not least, in this section we should add an annotation to let the cert-manager CA Injector set the caBundle in the webhook configuration.
To do so, we'll be editing the webhook configuration like this. Do not forget: you should leave the caBundle property of the webhook configuration empty.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  annotations:
    cert-manager.io/inject-ca-from: platform/config-sidecar-injector-service

Once you save the configuration, you should get an output from the following command:

$ yq e ".webhooks[0].clientConfig.caBundle" <(k get mutatingwebhookconfigurations.admissionregistration.k8s.io config-sidecar-injector -oyaml | k neat) | base64 -D | cfssl certinfo -cert -
{
  "subject": {
    "common_name": "config-sidecar-injector-service.platform.svc",
    "names": ["config-sidecar-injector-service.platform.svc"]
  },
  "issuer": {
    "common_name": "config-sidecar-injector-service.platform.svc",
    "names": ["config-sidecar-injector-service.platform.svc"]
  },
  "serial_number": "568486349258568295357093419160671126608237098261",
  "sans": [
    "config-sidecar-injector-service",
    "config-sidecar-injector-service.platform",
    "config-sidecar-injector-service.platform.svc"
  ],
  "not_before": "2022-01-04T18:40:41Z",
  "not_after": "2023-01-04T18:41:10Z",
  "sigalg": "SHA256WithRSA",
  "authority_key_id": "4D:13:A6:D2:85:CF:36:98:FD:65:3E:F5:27:A5:38:EF:40:71:90:3E",
  "subject_key_id": "4D:13:A6:D2:85:CF:36:98:FD:65:3E:F5:27:A5:38:EF:40:71:90:3E",
  "pem": "-----BEGIN CERTIFICATE-----\nMIID5jCCAs6gAwIBAgIUY5PPPi69t64ZpuNVWFRuPu6o9RUwDQYJKoZIhvcNAQEL\nBQAwNzE1MDMGA1UEAxMsY29uZmlnLXNpZGVjYXItaW5qZWN0b3Itc2VydmljZS5w\nbGF0Zm9ybS5zdmMwHhcNMjIwMTA0MTg0MDQxWhcNMjMwMTA0MTg0MTEwWjA3MTUw\nMwYDVQQDEyxjb25maWctc2lkZWNhci1pbmplY3Rvci1zZXJ2aWNlLnBsYXRmb3Jt\nLnN2YzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL4B3h/Ps2xyumsd\nxkm+Qnl5ZFcRWc0AmEFQRxDNS0z7T/MSjSNoFb+TbuE2hhmbxkFPA45/dUotxp9i\n6ZNMglirzwaxAyI+8MRGkKRoHKqNN/gj8MC9aUqhy38CImbl2AYiGD0jPx/GTj45\nyimUIr3QUTaU9TCQCSigjTzOnG4FIkEp35CPDJg5KM0exD7ItE8TdabwIYwI5BZp\n7o1eJjoOUHf9PufZcgBY0mxaMYVwfKuKz1dq/e/34qGniFduZe0XPMBTUKQxuH6U\nR8gWlKTOl7i+AKT/uATDX22I1TTeKXf6ymGvn4jz52+dY/DfKxlxk88U70XiGAkv\ne+2du38CAwEAAaOB6TCB5jAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB\n/zAdBgNVHQ4EFgQUTROm0oXPNpj9ZT71J6U470BxkD4wHwYDVR0jBBgwFoAUTROm\n0oXPNpj9ZT71J6U470BxkD4wgYIGA1UdEQR7MHmCH2NvbmZpZy1zaWRlY2FyLWlu\namVjdG9yLXNlcnZpY2WCKGNvbmZpZy1zaWRlY2FyLWluamVjdG9yLXNlcnZpY2Uu\ncGxhdGZvcm2CLGNvbmZpZy1zaWRlY2FyLWluamVjdG9yLXNlcnZpY2UucGxhdGZv\ncm0uc3ZjMA0GCSqGSIb3DQEBCwUAA4IBAQBrKZxo1tPLdwn0mZN64mya8P6DsUnf\nsW2rnetsjc5kJYTV6p/8iov/yQPmsnN9bIZSc87wOTa6QF1fL0jlhWVvUz+9DZPv\nwfdYPvv31KQj+9WNquiaXKr/uELTmIYXeD1/ckJx9ZLE0WjUnfMkRxGxsIiB9JYu\n3WzOIqQV5czS2UubrKsvvGmtPpdfK7JJsWyk9Z4Hga78SEPNmErayYk3zjEB5rMK\n6+bVQIP00P/h89iwwjdBcL7DQDdociKQznL/L2Dm2rtUbkVMPi42WAn6xilaGmVJ\n643hBOsPIVBRtI0g2pquGutx00t0kw2LZCS+81rkz7t+9miiy+x7T72c\n-----END CERTIFICATE-----\n"
}

You can test your webhook logic by applying some manifest. In this case, I'll be using my sample manifest in --dry-run=server mode. What we expect is that we should not see any certificate error in the output:

$ kubectl apply -f samples/ --dry-run=server
Error from server: error when creating "examples/auto/pod1.yaml": admission webhook "config-sidecar-injector-service.platform.svc" denied the request: could not find configs

A Prometheus exporter for certificates focusing on expiration monitoring.
Designed to monitor Kubernetes clusters from the inside, it can also be used as a standalone exporter: https://github.com/enix/x509-certificate-exporter

Get notified before they expire:

$ helm repo add enix https://charts.enix.io
$ helm install x509-certificate-exporter enix/x509-certificate-exporter

If you want to set Prometheus alerts, you can configure them manually by following this page.

There is one more problem in this setup: there is no logic embedded into the webhook that monitors the renewal of the certificates by cert-manager. Once cert-manager renews the certificate, it updates the content of the secret with the details of the new certificate. So, we need to develop some intelligent logic that monitors these changes and updates the server's certificates without causing downtime. This is where all the magic happens! As you can see, we're using a Kubernetes secret to mount a certificate into our webhook. The webhook starts with a particular certificate. After some amount of time (10m), if you run the command below, you should get an error like the following:

$ kubectl apply -f examples/auto/ --dry-run=server
serviceaccount/auto-sa unchanged (server dry run)
Error from server (InternalError): error when creating "examples/auto/pod1.yaml": Internal error occurred: failed calling webhook "config-sidecar-injector-service.platform.svc": Post "https://config-sidecar-injector-service.platform.svc:443/mutate?timeout=30s": x509: certificate has expired or is not yet valid: current time 2022-01-04T19:31:03Z is after 2022-01-04T19:25:35Z

This is the problem that we are trying to solve. Your certificates are renewed and stored in the secret config-admission-webhook-tls, but your webhook still uses the expired one. So what do you do when this happens? Of course, the easiest solution is to restart your webhook; after restarting, it uses the renewed certificates. But you have to restart it continuously. This isn't cool.
We use Kubernetes secrets to store our certificate and mount it into our webhook as a file. This means that when the secret we mounted is updated, we can see this update on the file system. So when our certificate secret is updated, we can update our certificate at runtime without restarting anything. In addition, we can use fsnotify to watch the certificate files and update our HTTP server's certificate when any file-update event happens on the file system. But still, there is a catch with fsnotify and Kubernetes: if we watch WRITE events with fsnotify, we can't handle the updates, because Kubernetes uses symlinks to mount a file. The events we actually get are CHMOD and REMOVE, so we can watch REMOVE events to handle the certificate change instead of WRITE events. Please do not make the same mistake; see this article. The complete code for this solution is below.

You will also see certificate-update logs when your certificate is renewed and Kubernetes updates the mounted file. After these logs, we can test our webhook by connecting to its shell to check whether the certificate it is currently serving is up to date. You can test this by applying the following manifest in your cluster: So, run the commands below and check whether the certificate dates match those in the secret that holds the certificates:

$ DOM="localhost"
$ PORT="8080"
$ openssl s_client -servername $DOM -connect $DOM:$PORT | openssl x509 -noout -dates

As a result, we finally updated our certificate when cert-manager renewed it, without any downtime. Now we can define short-lived certificates, which makes us more secure than creating long-lived certificates (e.g., ten years), and we automated the certificate update process. As you can see, we have used a bunch of awesome tools to end up with an automated, easy-to-use, and observable certificate management process for our Kubernetes workloads.
With that, we also used short-lived certificates for our production-grade Kubernetes workloads, which is a good thing from the security perspective of the applications. We also monitored the whole process to avoid production outages caused by certificate expirations. We hope you enjoyed the whole story. Please stay tuned for the next blog posts, and do not forget to subscribe and clap 😋🤝
Textadept 11.0 – A fast, minimalist, and remarkably extensible text editor
Features

Textadept is a fast, minimalist, and remarkably extensible cross-platform text editor.

Fast and Minimalist
Textadept’s user interface is sleek and simple. Relentlessly optimized for speed and minimalism over the years, the editor consists of less than 2000 lines of C and C++ code, and less than 4000 lines of Lua code.

Cross Platform
Textadept runs on Windows, macOS and Linux. It also has a terminal version, which is ideal for work on remote machines.

Remarkably Extensible
Textadept is an ideal editor for programmers who want endless extensibility without sacrificing speed or succumbing to code bloat and featuritis. The editor gives you complete control over the entire application using the Lua programming language. Everything from moving the caret to changing menus and key bindings on-the-fly to handling core events is possible. Its potential is vast.

Multiple Language Support
Being a programmer’s editor, Textadept excels at editing source code. It understands the syntax and structure of more than 100 different programming languages and recognizes hundreds of file types. Textadept uses this knowledge to make viewing and editing code faster and easier. It can also compile and run simple source files.

Unlimited Split Views
Both the graphical version and the terminal version of Textadept support unlimited vertical and horizontal view splitting, even of the same file.

Customizable Themes
Textadept uses themes to customize its look and feel. It comes with built-in light, dark, and terminal themes.

Code Autocompletion
Not only can Textadept autocomplete words in files, but it can also autocomplete symbols for programming languages and display API documentation.

Keyboard Driven
Textadept can be entirely keyboard driven. The editor defines key bindings for many common actions. You can easily reassign existing bindings or create new ones. Keys may be chained together or grouped into language-specific keys or key modes.
Self Contained
Textadept’s binary packages are self-contained and need not be installed. No administrator privileges are required either.

Comprehensive Manual
Textadept comes with a comprehensive user manual in the application’s docs/ directory. It covers all of Textadept’s main features, including installation, usage, configuration, theming, scripting, and compilation.

Exhaustive API Documentation
Since Textadept is entirely scriptable with Lua, its API is heavily documented. This documentation is also located in docs/ and is the ultimate resource on scripting Textadept. (The editor’s Lua internals also provide abundant scripting examples.)

Try It Yourself
Learn even more about Textadept by downloading and trying it out yourself. If you’re not completely satisfied, contact us or e-mail me personally (orbitalquark att triplequasar.com). You may also fork the project, submit patches, or sponsor a feature. Textadept is 100% open source.

*Warning: nightly builds may be untested, may have bugs, and are the absolute cutting-edge versions of Textadept. Please exercise caution if using them in a production environment.
Show HN: Glass Editor – Java IDE, video conferencing, HTML editing
Glass Editor
Accelerate Your Development

Glass Editor is the fast, high-quality development environment where we invent new tools to speed up how you build, debug, and deploy mainly Java programs, but we also have some great tools for web development and general software development in any language as well.

Debugging
View the current value of variables while debugging instantly, right beside your code. Also view the stack trace of the current thread to easily jump between methods. Click on the next suspended thread button on the top toolbar to jump to the next suspended thread, or hover over it to see the list of suspended threads.

Autocomplete
Our autocomplete shows you all the methods, fields, and variables available at a location, sorted based on the surrounding context. It updates as you begin typing the name of the method that you're looking for to allow you to select your target extremely quickly.

Dependencies
Download and include Maven dependencies declared in POM files from your approved Maven repositories. Tools to analyze and review your dependencies, such as extracting, recompiling and repackaging Maven dependencies and detecting and removing unused methods.

Diff / Merge
Diff and merge any files, commits, or folders/trees.

HTML Editing
Edit HTML files and/or the text inside their rendered result side by side and watch the changes be applied in real time.

File / Database
Access and manipulate files and databases locally or remotely over ssh. Open files inside of zipped folders and even edit them as well. Not all types of remote operating systems may be supported, and they may be required to have certain utilities installed to perform certain features.

Images and SVGs
Create, edit, and combine raster and vector images using our powerful collection of tools. Look out for video editing tools in the future! GIF editing is already partially supported.
Discussions
Track issues, create pull requests for your local git repos, and discuss ideas with your team or your customers in HTML or plain text, with granular permissions for viewing, posting, and voting in a tree of groups.

Git
View your entire git repository and perform all of the common git commands.

Video
Video or voice conference with any groups that you are a member of.
Russians are the dumbest idiots on the planet
Russians are dumb. Hopelessly stupid. They are amateurs of the worst kind. Ignoramuses on steroids. Why? Well, for one, their so-called super-dooper biowarfare agent “Novichok” seems unable to kill anybody. The Russians must have realized that. This is why, when they tried to kill Skripal (after freeing him from jail) they put that Novichok thing all over the place: on the bench near Salisbury, on Skripal’s door handle, even in some bottle of perfume a local addict found in the trash. Probably all over the Skripal home, and this is why the Brits initially said that they would tear down the extremely toxic place (yet both the Skripal cat and their hamster survived – tells you how utterly useless that pretend biowarfare substance really was…). One would have thought that after this total cluster-bleep the Russians would have learned their lesson. But no. They are clearly too dumb for that. So they decided to poison Alexei Navalnyi, a well-known “dissident”. Not only did they use exactly the same “Novichok” (or so says the German media), they allowed Navalnyi’s aircraft to make an emergency landing, and the FSB did nothing to prevent an ambulance from bringing Navalnyi to a hospital. Apparently, the FSB does not even have the authority to prevent such urgent treatment of the man they want to kill. Heck, they can’t even create a traffic jam to keep Navalnyi from getting to a hospital. Even worse, these accursed Russki doctors gave Navalnyi atropine, the exact same substance the Germans gave him. Makes me wonder if these doctors were not all CIA/BND agents trying to save Navalnyi’s life… Clearly, the FSB are also stupid: they can’t even get aircraft or doctors to obey them… But it gets worse. In spite of the fact that Navalnyi had broken the terms of his suspended sentence, and in spite of the fact that such a person cannot leave the country, these Russian imbeciles allowed him to fly to Germany while his body was still full of Novichok sloshing around.
All the Russians needed to do to kill Navalnyi would have been to give him a heart attack using any one of the many untraceable agents in existence (say, potassium chloride). In despair, the clueless FSB might have caused Navalnyi to die in a car “crash”. But they can’t even do that. Shame on you, FSB! And since Navalnyi is diabetic, killing him ought to be fantastically simple: just give him the wrong dose of meds and, voilà, bye-bye Navalnyi. But no, these idiots decided to use the now infamous Novichok. Obviously, Russians are the dumbest, most incompetent idiots on the planet! Russian special services and biological research institutes are especially known for their crass incompetence. Proof? They stole the Covid19 vaccine from the Brits, THEN they made it dangerous. Just like the so-called Russian hackers (another Russian category famous for its extremely low IQ!) could not even try to hack DNC computers or steal the 2016 election without leaving their Russian-sounding aliases all over the place. Heck, these hackers even worked only during Moscow-time office hours. I am telling you – the Russians are fantastically stupid, the dumbest people on the planet. Especially their intel and security officers, their biowarfare specialists and their hackers. Morons. All of them! Let’s all repeat it together: the Russians are dumb! the Russians are dumb! the Russians are dumb! That is very “highly likely”!
Prophets, wizards and games that never end
A football game ends. It has a clear set of rules, a precise length and a mechanism to select the winner. Whichever team follows the rules to score the most goals by the end of the ninetieth minute wins the game. Players are crowned champions in front of a fawning crowd, while losers slink away into the darkness. This is what the religious scholar James Carse calls a finite game in his book Finite and Infinite Games. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play. Infinite games are not about winners or losers. Instead, they are about the evolution of the game itself, flowing in whichever direction the game takes them. In football culture, for example, there is no winner per se but a continuous motion. Those involved work to keep the culture alive. The referee's whistle never signals the end. It's not to say that one type of game is better than the other. Instead, splitting life into a series of finite and infinite games can be a powerful way to look at the world. Titles like The New Climate War and The Climate Crisis Is Our Third World War position climate change as a finite game. A game to be won. A problem to be conquered once and for all. In some ways, these warmongers are not wrong. We all know what losing means. However, there is no definition of victory — we play this game so that we can continue to play. Continuing to play means keeping the population alive, healthy and happy. The infinite game is the never-ending endeavour to use technology and policy to maintain the planet's ability to sustain us. An endeavour that needs to respond to our growing population, our growing quality of life and our increasing desire to chomp through fine steaks around the world. Charles Mann created two opposing camps of players in this infinite game in his book, The Wizard and the Prophet. William Vogt is a prophet because he sees the world as a petri dish. A petri dish that is finite and depleting.
Humans are just the bacteria growing within it. At first, this bacteria grows slowly. Start with a speck of bacteria on the surface of the dish's culture. Give it a few minutes, and it divides into two. Another few minutes pass and there are four. Give it a few days and there is a six-foot film of bacteria smothering the entire surface of the planet. Prophets proclaim a similar fate for humanity. Books like Vogt's Road to Survival, Dr. Ehrlich's The Population Bomb or Limits to Growth reach the same conclusion - humanity is on a death march up the exponential curve to inevitable doom. Each step in growth means a step closer to the depletion of a critical resource. The world has a finite carrying capacity. An upper limit to what it can sustain. Greater populations and economic growth propel us towards that line. Prophets would like us to stay away from such lines. To turn our trajectories around or else fall victim to our biological nature of growth followed by collapse. Wizards see the same hurdles but choose to leap over them rather than turn away. When the prophets first ran the numbers a hundred years ago, our petri dish was a sad sight. The growth in farm productivity was plateauing, there was no new land to farm and the supply of Europe's primary fertiliser - bird excrement mined from Peruvian islands - was looking dangerously low. However, wizards saw it differently. The true problem was not that humankind risked surpassing natural limits, but that our species didn't know how to tap more than a fraction of the energy provided by nature. Harnessing more than was available at the time required new technology. For agriculture, this was the Haber–Bosch process. Instead of waiting for bacteria to slowly fix nitrogen from the air into plant-friendly compounds in the soil, two chemists sidestepped nature. By reacting freely available nitrogen from the air with hydrogen at industrial scales, fertiliser could be shipped across the world.
The vanishing Peruvian bird faeces were no longer a limiting factor to growth. The magnitude of the change wrought by artificially fixed nitrogen is hard to grasp. Think of the deaths from hunger that have been averted, the opportunities granted to people who would otherwise not have had a chance to thrive, the great works of art and science created by those who would have had to devote their lives to wringing sustenance from the earth... how many are owed to Haber and Bosch? How many would exist if this wizardly triumph had not produced the nitrogen that filled their creators' childhood plates? The yield of wheat per unit of farmable land has more than doubled in the last fifty years.

Alas, there is no such thing as a free meal. For each plate we farm, there's a wake of destruction left behind. Just over half of the fertiliser is taken up by crops, with the remainder spilling into lakes, rivers and what were once our boundless oceans. The result: a boom in microbes that suck the ocean dry of oxygen, and with it more wildlife than we would care to imagine. All the while, our population continues to increase. Wizards simply see this as the next problem to solve. From breeding new plants that absorb more sunlight to building factories that may one day feed our planet, there are plenty of moonshots in motion. Each of the cascading effects is yet another problem to solve.

Wizards and prophets share one unifying belief: our ability to fight our nature. Whether by curbing humanity's growth trajectory or by stretching nature's ability to withstand it, there's an implicit assumption that we are in charge of our own destiny. Charles Mann concludes his book with a nagging question: Are we special? Are we of nature or above it? Alas, the answer comes at the end of the game.
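The petri-dish arithmetic that motivates the prophets can be made concrete with a toy doubling model. The numbers here (one starting cell, a dish that holds a billion) are illustrative assumptions, not figures from any of the books above; the point is only how few doublings any finite limit permits:

```python
def doublings_until_limit(population, carrying_capacity):
    """Count how many doublings a population can make
    before the next doubling would exceed a finite limit."""
    doublings = 0
    while population * 2 <= carrying_capacity:
        population *= 2
        doublings += 1
    return doublings

# A single cell in a dish that can hold a billion:
# fewer than 30 doublings take it from speck to saturation.
print(doublings_until_limit(1, 1_000_000_000))  # 29
```

At a twenty-minute doubling time, that is under ten hours from invisible speck to full dish, which is why exponential growth against any fixed carrying capacity reads, to a prophet, as a countdown.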
Transhumanism Leadership
Here’s what scientists learned about the effects of mental stress inflaming the gut and how we can use this valuable information.
SofGAN: A Portrait Image Generator with Dynamic Styling
A GAN-based image generator with explicit attribute control. Paper Code

Recently, GANs have been widely used for portrait image generation. However, in the latent space learned by GANs, different attributes, such as pose, shape, and texture style, are generally entangled, making explicit control of specific attributes difficult. To address this issue, we propose the SofGAN image generator, which decouples the latent space of portraits into two subspaces: a geometry space and a texture space. The latent codes sampled from the two subspaces are fed to two separate network branches, one generating the 3D geometry of portraits in a canonical pose and the other generating textures. The aligned 3D geometries also come with semantic part segmentation, encoded as a semantic occupancy field (SOF). The SOF allows the rendering of consistent 2D semantic segmentation maps at arbitrary views, which are then fused with the generated texture maps and stylized into a portrait photo using our semantic instance-wise (SIW) module. Through extensive experiments, we show that our system can generate high-quality portrait images with independently controllable geometry and texture attributes. The method also generalizes well to various applications such as appearance-consistent facial animation and dynamic styling.

Overview

First row: our decoupled representation allows explicit control over pose, shape and texture styles. Starting from the source image, we explicitly change its head pose (2nd image), facial/hair contour (3rd image) and texture styles. Second row: interactive image generation from incomplete segmaps. We allow users to gradually add parts to the segmap and generate colorful images on-the-fly.

Applications

The first two videos demonstrate regional style adjustment conditioned on a specified semantic segmentation map. One of the key features of our SIW-StyleGAN is semantic-level style control.
Benefiting from the StyleConv blocks and the style-mixing training strategy, we can separately control the style of each semantic region.

Video 1: Regional style adjustment
Video 2: Style transfer

Within our generation framework, we can generate free-viewpoint portrait images from geometric samples or real images by changing the camera pose. Because our SOF is trained with multi-view semantic segmentation maps, the geometric projection constraint between views is encoded in the SOF, which enables our method to keep shape and expression consistent when changing the viewpoint.

Video 3a: Free-viewpoint rendering and shape morphing
Video 3b: Free-viewpoint rendering and shape morphing

We collected a video clip from the Internet and generated segmentation maps for each frame with a pre-trained face parser. Our method preserves texture style and shape consistency across various poses and expressions without any temporal regularization.

Video 4: Animated video sequences

Our method allows users to gradually add parts to the segmap and generate colorful images on-the-fly.

Video 5a: Generation from drawing
Video 5b: An awesome online drawing app, Wand, made by 影眸科技.

Acknowledgments

We thank Xinwei Li and Qiuyue Wang for dubbing the video, Zhixin Piao for comments and discussions, and Kim Seonghyeon and Adam Geitgey for sharing their StyleGAN2 implementation and face recognition code for our comparisons and quantitative evaluation.

Bibtex

@article{sofgan,
  title={Sofgan: A portrait image generator with dynamic styling},
  author={Chen, Anpei and Liu, Ruiyang and Xie, Ling and Chen, Zhang and Su, Hao and Yu, Jingyi},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={1},
  pages={1--26},
  year={2022},
  publisher={ACM New York, NY}
}
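The decoupling at the heart of the design can be illustrated with a toy sketch. This is not the SofGAN implementation (every function name here is invented for illustration); it only models the contract the paper describes: segmentation depends on the geometry code and camera pose alone, while texture is applied downstream:

```python
import random

def sample_geometry(seed):
    """Sample a code from the (toy) geometry subspace."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

def sample_texture(seed):
    """Sample a code from the (toy) texture subspace."""
    rng = random.Random(1000 + seed)
    return [rng.random() for _ in range(4)]

def semantic_segmentation(z_geo, view_angle):
    """Stand-in for the SOF branch: the rendered segmentation map
    is a function of geometry code and viewpoint only."""
    return [round(g + 0.1 * view_angle, 6) for g in z_geo]

def render_portrait(z_geo, z_tex, view_angle):
    """Stand-in for the full pipeline: fuse the segmentation
    with a texture code (the role of the SIW module)."""
    seg = semantic_segmentation(z_geo, view_angle)
    return [s * t for s, t in zip(seg, z_tex)]

z_geo = sample_geometry(0)
# Two different textures over the same geometry: the segmentation
# (shape) is identical, only the appearance changes.
img_a = render_portrait(z_geo, sample_texture(1), view_angle=0.0)
img_b = render_portrait(z_geo, sample_texture(2), view_angle=0.0)
assert semantic_segmentation(z_geo, 0.0) == semantic_segmentation(z_geo, 0.0)
assert img_a != img_b
```

Holding `z_geo` fixed while resampling `z_tex` is the toy analogue of the paper's appearance-consistent restyling; varying `view_angle` with both codes fixed is the analogue of free-viewpoint rendering.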
Avail Job Consultancy Services for an Ideal Job Placement
by Lahn Hua Posted: Jul 10, 2021

In today’s day and age, there are many opportunities available in varied sectors that offer multiple job openings. There is also an overwhelming number of applicants waiting to be accepted at reputed companies. The level of competition is quite astonishing, which makes the process of landing a job even fiercer. Many people dream of working at the best companies, in a job profile that satisfies their professional goals and helps them build a better skill set. Applicants endeavour to contribute to the company and broaden their horizons by pushing their boundaries and reimagining their capacities to improve and evolve. This is why a lot of people dream of working overseas with a reputed brand and fulfilling these aspirations. An international company can offer them the platform to work with an eclectic group of people and a well-established brand, which can surely expand their working process and improve their deliverables.

Many candidates avail themselves of job consultancy services to gain more knowledge in this area. Expert advice can offer immediate help as well as long-term solutions for taking the right steps towards a job overseas. International companies list specific requirements online that candidates can review to understand whether they are right for the job. There is also the matter of legalities and other integral processes that must be adhered to. To ensure these formalities are properly fulfilled and there is no oversight, job consultants can be contacted for guidance. Job consultancy services primarily help in finding the right job for the candidate: a profile that matches their interest area and suits them well.

HCL Vietnam is an entity of HCL Technologies, a next-generation global technology brand. From technology products and services to invention and risk-taking, HCL is known for its path-breaking work culture and services.
HCL is also known for its encouragement and support of its workforce, customers, and the overall ecosystem it functions in. The brand believes in collaborating, conceptualizing, and delivering creative solutions in the business landscape. There are many IT jobs in Vietnam, and HCL Vietnam offers exciting job openings for freshers (college graduates) as well as experienced applicants. The hi-tech careers of freshers can now kickstart at HCL Vietnam. Grab the opportunity and create a successful work graph.

I'm driven towards all things related to the IT industry, focusing on its many developments and domains. Currently, I work at HCL Vietnam and create content in my free time.