labels: float32, 1–4.24k
Title: string, lengths 1–91
Text: string, lengths 1–61.1M
3
Tech and the fine art of complicity
What are the opportunities for technology in the arts – and what threats does it pose? Knight Foundation, which recently launched an open call for ideas to use technology to connect people to the arts, asked leaders in the field to answer that question. Here, Jer Thorp, a data artist and innovator-in-residence at the Library of Congress, answers with his own, provocative inquiry. As an artist who works with technology, my daily practice is punctuated with questions: Which natural language processing algorithm should I be using? How well will this cheap webcam function in low light? What exactly does that error message mean? Lately, though, I've been spending a lot of time dwelling on one question in particular: How are the works that I'm creating, the methods that I'm using to create them, and the ways that I am presenting them to the world supporting the agendas of Silicon Valley and the adjacent power structures of government? Tech-based artist Zach Lieberman said in 2011 that "artists are the R&D department of humanity." This might be true, but it's not necessarily a good thing. In a field that is eager to demonstrate technical innovation, artists are often the first to uncover methodologies and approaches that, while intriguing, are fundamentally problematic. I learned this firsthand after I released Just Landed in 2009, a piece that analyzed tweets to extract patterns of travel. While my intentions were good, I later learned that several advertisers saw the piece as a kind of instructional video for how personal data could be scraped, unbeknownst to users, from a Twitter feed. At no point in the development of Just Landed did I stop to consider the implications of the code I was writing, and how it might go on to be used. Unbeknownst to me, I was complicit in the acceleration of dangerous "big data" practices. Complicity can also come through the medium in which we present our work. In 2017, Jeff Koons partnered with Snapchat to produce an augmented reality art piece, which rendered Koons' iconic structures in real-world locations, visible only through a Snapchat Lens. Though Koons has never been afraid to embrace commercialism, this project forces the user to embrace it as well: to view the artwork, people need to purchase specific hardware, install Snapchat, and agree to a Terms of Service that has concerning implications for user privacy and creative ownership. It seems clear to me in my own practice that any artwork that requires a user to agree to commercial use of personal data is fundamentally dangerous. Google's Arts and Culture app uses face recognition to match images to artworks. Left: A police officer uses glasses enabled with facial recognition at Zhengzhou East Railway Station. Right: A matched face extracted from "Gongginori" by Kim Jun-geun. Screenshot by Jer Thorp. In January, Google added a feature to its heretofore unnoticed Arts and Culture app which allowed users to match their own selfies to artworks held by museums and galleries around the world. This project, though phenomenally popular, was rightly criticized for offering a depressingly thin experience for people of color, who are severely underrepresented in museum holdings. As well as being a prejudiced gatekeeper to the art experience, this tool seems to me to play another dangerous role: as a cheerleader for facial recognition.
This playfully soft demonstration of face recognition is like using a taser to power a merry-go-round: it works, and it might be good PR, but it is certainly not matched to the real-world purpose of the technology. Five years ago, artist Allison Burtch coined the term 'cop art' to describe tech-based works that abuse the same unbalanced power structures they are criticizing. Specifically, she pointed to artworks that surveil the viewer, and asked the question: How is the artist any different from the police? As Facebook, Twitter and Google face a long-awaited moral reckoning, artists using technology must also examine ourselves and our methods critically. How are we different from these corporations? How are we complicit? For more about the open call for ideas on art and technology, visit prototypefund.org.
2
Canon’s flagship DSLR line will end with the EOS-1D X Mark III, eventually
By Richard Lawler, Dec 30, 2021, 11:48 PM UTC. When Canon revealed the EOS-1D X Mark III in January 2020, we proclaimed that the DSLR "still isn't dead," but that camera will mark the end of the line for a flagship model that some pro photographers still swear by to capture everything from sporting events to wild animals. An end to the production and development timeline of the EOS-1 is estimated as "within a few years." CanonRumors points out an interview Canon's chairman and CEO Fujio Mitarai gave this week to the Japanese newspaper Yomiuri Shimbun (via Y.M. Cinema Magazine). The piece highlights how high-end mirrorless interchangeable-lens cameras have taken market share that digital single-lens reflex (DSLR) cameras previously dominated. In it, the CEO is quoted (in Japanese, which we've translated to English) as saying, "Market needs are rapidly moving toward mirrorless cameras. So accordingly, we're increasingly moving people in that direction." The article states that the Mark III is "in fact" the last model in Canon's flagship EOS-1 series and that in a few years Canon will stop developing and producing its flagship DSLR cameras in favor of mirrorless cameras. However, despite what some headlines say, it doesn't mean this is the end of Canon DSLRs (yet). While the article makes it plain that mirrorless cameras like Canon's own EOS R3 represent the future of the segment, it also says that because of strong overseas demand, the company plans to continue making intro/mid-range DSLR cameras for the time being. As for the Mark III itself, while a new model is not around the corner, its estimated lifespan as an active product is still measured in years. In a statement given to The Verge, a Canon US spokesperson confirmed, "The broad details of Mr. Mitarai's interview as described in the article are true. However, while estimated as 'within a few years,' exact dates are not confirmed for the conclusion of development/termination of production for a flagship DSLR camera."
1
Florence Medieval Wine Windows Are Open Again to Serve All Kinds of Things
By Regia Marinho, Aug 10, 2020. Ideas from the past… from Florence, Italy. To serve food and drinks while social distancing, bars and restaurants are bringing back the tiny windows used during the 17th-century plague. Wine windows, known locally as buchette del vino, are small hatches carved into the walls of over 150 buildings in Florence…
2
An open source TTRPG town generator that outputs text ready to read out
35
Facebook’s Pushback: Stem the Leaks, Spin the Politics, Don’t Say Sorry
The Facebook Files. Chief Executive Mark Zuckerberg drove the response to disclosures about the company's influence, sending deputies to testify in Congress. Photo: Win McNamee/Getty Images. The day after former Facebook employee and whistleblower Frances Haugen went public in October, the company's team in Washington started working the phones.
3
The Cost of Digital Consumption
By Halden Lin, Aishwarya Nirmal, Shobhit Hathi, and Lilian Liang. In 2014, Google had to shark-proof their underwater cables. It turned out that sharks were fond of chewing on the high-speed cables that make up the internet. While these "attacks" are no longer an issue, they are a reminder that both the internet and sharks share the same home.

Figure: Book → E-Book. It is difficult to ignore the environmental consequences of ordering a physical book on Amazon. But what happens when you download an e-book instead?

When you buy a book on Amazon, the environmental consequences are difficult to ignore. From the jet fuel and gasoline burned to transport the packages, to the cardboard boxes cluttering your house, the whole process is filled with reminders that your actions have tangible, permanent consequences. Download that book onto an e-reader, however, and this trail of evidence seems to vanish. The reality is, just as a package purchased on Amazon must be transported through warehouses and shipping centers to get to your home, a downloaded book must be transported from data centers and networks across the world to arrive on your screen. And that's the thing: your digital actions are only possible because of the physical infrastructure built to support them. Digital actions are physical ones. This means that Google has to be mindful of sharks. This also means that we should be mindful of the carbon emissions produced by running the world's data centers and internet networks. According to one study, the Information and Communications Technology sector is expected to account for 3–3.6% of global greenhouse gas emissions in 2020 [1]. That's more than the fuel emissions for all air travel in 2019, which clocked in at 2.5%. The answer to "why?" and "how?" may not be immediately obvious, but that's not the fault of consumers. A well-designed, frictionless digital experience means that users don't need to worry about what happens behind the scenes and, by extension, the consequences. This is problematic: the idea of "hidden costs" runs contrary to principles of environmental awareness. Understanding how these digital products and services work is a crucial first step towards addressing their environmental impact.

Each type of digital activity produces different levels of emissions. The amount of carbon dioxide emitted by a particular digital activity is a function of the quantity of information that needs to be loaded. More specifically, we can estimate emissions using the following formula:

emissions (g CO₂) = n bytes × X kWh/byte × Y g CO₂/kWh

A byte is a unit of information. X = 6 × 10⁻¹¹ is the global average for energy consumed to transmit one byte of data in 2015, as calculated by Aslan et al. (2017) [2]. Y = 707 represents the EPA's U.S. national weighted average for grams of CO₂ emitted per kilowatt-hour of electricity consumed. Ideally, this formula would also include the energy usage of the data source and your device, but these will vary across digital media providers, users, and activities.
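To make the arithmetic concrete, here is a minimal Python sketch of the estimate above (the 1.2 MB page size is an illustrative value, not a measurement taken from the article):

# Sketch of the emissions estimate described above.
X_KWH_PER_BYTE = 6e-11      # global average energy to transmit one byte (Aslan et al. 2017)
Y_G_CO2_PER_KWH = 707       # EPA U.S. weighted average, grams of CO2 per kWh

def co2_grams(n_bytes: float) -> float:
    """Estimate grams of CO2 emitted to transfer n_bytes over the network."""
    return n_bytes * X_KWH_PER_BYTE * Y_G_CO2_PER_KWH

# Example: a hypothetical 1.2 MB page load comes out to roughly 51 mg of CO2.
print(round(co2_grams(1.2e6) * 1000, 1), "mg CO2")  # -> 50.9 mg CO2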
This formula therefore provides a reasonable lower bound for emissions.

Chart: Emissions of Websites. Each bar represents the carbon emitted when scrolling through a website for 60 seconds; the car-distance equivalent is calculated using the fuel economy of an average car, according to the EPA.

In order to load the Parametric Press article that you are currently reading, we estimate that 51 milligrams of CO₂ were produced. The same amount of CO₂ would be produced by driving a car 0.20 meters (based on the fuel economy of an average car, according to the EPA). These emissions are a result of loading data for the text, graphics, and visualizations that are then rendered on your device. The chart displays the carbon emitted when loading various websites and scrolling through each at a constant speed for 60 seconds (this scrolling may incur more loading, depending on the site). All data collection was done using a Chrome web browser over a 100 Mbps Wi-Fi connection. As the chart shows, loading websites like Google, which primarily show text, produces much lower emissions than loading websites like Facebook, which load many photos and videos to your device.

Chart: Emissions of Audio. Each bar represents the carbon emitted when listening to an audio clip for 60 seconds. Note: the amount of data loaded may represent more than a minute's worth due to buffering.

Let's take a closer look at one common type of non-text media: audio. When you listen to audio on your device, you generally load a version of the original audio file that has been compressed into a smaller size. In practice, this size is often determined by the choice of bitrate, which refers to the average amount of information in a unit of time. The NPR podcast shown in the visualization was compressed to a bitrate of 128 kilobits per second (there are 8 bits in a byte), while the song "Old Town Road", retrieved from Spotify, was compressed to 256 kilobits per second. This means that in the one minute that both audio files were played, roughly twice as much data needed to be loaded for "Old Town Road" as for "Digging into 'American Dirt'", which gives the song about twice as large a carbon footprint. The fact that the song has greater carbon emissions is not a reflection of the carbon footprint of songs versus podcasts, but rather of the difference in the bitrate of each audio file. These audio examples have lower carbon emissions than most of the multimedia websites shown earlier.
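As a back-of-the-envelope check on those audio numbers, the same formula can be applied to a bitrate (a sketch; the bitrates are the ones quoted above, and everything else follows from the earlier constants):

# Sketch: grams of CO2 for one minute of streaming at a given bitrate,
# using the same X and Y constants as in the formula above.
X_KWH_PER_BYTE = 6e-11
Y_G_CO2_PER_KWH = 707

def streaming_co2_grams(bitrate_kbps: float, seconds: float = 60) -> float:
    """Estimate grams of CO2 for `seconds` of playback at `bitrate_kbps`, ignoring buffering."""
    n_bytes = bitrate_kbps * 1000 / 8 * seconds  # kilobits per second -> total bytes
    return n_bytes * X_KWH_PER_BYTE * Y_G_CO2_PER_KWH

print(streaming_co2_grams(128))  # ~0.041 g for a minute of the 128 kbps podcast
print(streaming_co2_grams(256))  # ~0.081 g for a minute of the 256 kbps song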
Videos are a particularly heavy digital medium.

Chart: Emissions of Video. Each bar represents the carbon emitted when watching a video for 60 seconds at two different qualities, 360p and 1080p. Note: like audio, videos are buffered, which means that playing the video may have loaded more than 60 seconds of content.

The chart shows emissions for streaming different YouTube videos at two different qualities, 360p and 1080p, for 60 seconds each. When you view a video at a higher quality, you receive a clearer image on your device because the video that you load contains more pixels. Pixels are units of visual information. In the chart, the number in front of the "p" for quality refers to the height, in pixels, of the video. This is why there are greater emissions for videos at 1080p than those at 360p: more pixels means more data loaded per frame. Bitrate also plays a role in video streaming. Here, bitrate refers to the amount of visual and audio data loaded to your device over some timespan. From the chart, it is clear that the "Old Town Road" music video has a higher bitrate than the 3Blue1Brown animation at both qualities. This difference could be attributed to a variety of factors, such as the frame rate, the compression algorithm, and the producers' desired video fidelity. In the examples provided, videos produced far more CO₂ than audio over the same time span. This is especially apparent when comparing the emissions for the audio of "Old Town Road" and its corresponding music video. Not only does a video require loading both audio and visual data, but visual data is also particularly heavy in information. Notice that loading the website Facebook produced the most emissions, likely a result of loading multiple videos and other heavy data.

Ads are another common form of digital content. Digital media providers often include advertisements as a method of generating revenue. When the European version of the USA Today website removed ads and tracking scripts to comply with GDPR (the General Data Protection Regulation), the size of its website decreased from 5,000 to 500 kilobytes with no significant changes to the appearance of the site. This means that ads and tracking scripts accounted for roughly 90% of the data loaded.

In a lot of cases, when you stream content online, you don't receive all of the information for that content at once. Instead, your device loads incremental pieces of the data as you consume the media. These pieces are called packets. In each media emission visualization, we estimated emissions based on the size and quantity of the packets needed to load each type of media.

Chart: Media Emissions by Medium. A timeline of loading packets for each type of media, with cumulative emissions.
In this timeline breakdown, we can see that the way in which packets arrive for video and audio differs from the pattern for websites. When playing video and audio, packets tend to travel to your device at regular intervals. In contrast, the packets for websites are weighted more heavily towards the beginning of the timeline, but websites may make more requests for data as you scroll through and load more content. We've just seen how digital streams are made up of packets of data sent over the internet. These packets aren't delivered by magic. Every digital streaming platform relies on a system of computers and cables, each part consuming electricity and releasing carbon emissions. When we understand how these platforms deliver their content, we can directly link our digital actions to the physical infrastructure that releases carbon dioxide. Let's take a look at YouTube. With its 2 billion+ users, YouTube is the prototypical digital streaming service. How does one of its videos arrive on your screen?

Figure: YouTube Pipeline + Electricity Usage for 2016 (Preist et al. 2017). Origin data centers account for 350 GWh; the later stages of the pipeline are detailed below, and the 19.6 TWh total is compared against estimates for the ICT sector in 2010 and 2020 (projected) from Belkhir & Elmeligi (2017).

Videos are stored on servers called "data centers": warehouses full of giant computers designed for storing and distributing videos. For global services, these are often placed around the world. YouTube's parent company, Google, has 21 origin data centers strategically placed across four continents (North America, South America, Europe, and Asia). Let's take a closer look at one of these origin data centers. This one is in The Dalles, Oregon, on the West Coast of the United States. For information to get from this data center to you, it first goes through Google's own specialized data network to what they call Edge Points of Presence (POPs for short), which bring data closer to high-traffic areas. There are three metro areas with POPs in this region: Seattle, San Francisco, and San Jose. From these POPs, data is routed through smaller data centers that form the "Google Global Cache" (GGC). These data centers are responsible for storing the more popular or recently watched videos for users in a given area, ensuring no single data center is overwhelmed and service stays zippy. There are 22 in the region shown on the map. A more general term for this collection of smaller data centers is a Content Delivery Network (CDN for short). In 2018, researchers from the University of Bristol used publicly available data to estimate the energy consumption of each step of YouTube's pipeline in 2016. Google does not disclose its data center energy consumption for YouTube traffic specifically. Therefore, Chris Preist, Daniel Schien and Paul Shabajee used the energy consumption numbers released for a similar service's (Netflix's) data centers to estimate YouTube's data center energy consumption. They found that all data centers accounted for less than 2% of YouTube's electricity use in 2016 [3]. Google doesn't have its own network for communication between POPs and the Google Global Cache. For that, they use the internet.
The internet is a global "highway of information" that allows packets of data to be transmitted as electrical impulses. A packet is routed from a source computer, through cables and intermediary computers, before arriving at its destination. In addition to the 550,000 miles of underwater cables that form the backbone of the internet, regions have their own land-based networks. Here's (roughly) what the major internet lines of the West Coast look like [4]. Perhaps not surprisingly, the map resembles our interstate highway system. Preist et al. estimate that this infrastructure consumed approximately 1,900 gigawatt-hours of electricity to serve YouTube videos in 2016 [3], enough to power 170,000 homes in the United States for a year, according to the EIA. The packets traveling across this information highway need "off-ramps" to reach your screen. The off-ramps that packets take are either "fixed line" residential networks (wired connections from homes to the internet) or cellular networks (wireless connections from cell phones to the internet). The physical infrastructure making up these two types of networks differs, and the two therefore have distinct profiles of energy consumption and carbon emissions. An estimated 88% of YouTube's traffic went through fixed line networks (from your residential cable, DSL, or fiber-optic providers), and this accounted for approximately 4,400 gigawatt-hours of electricity usage [3], enough to power over 400,000 U.S. homes. In comparison, only 12% of YouTube's traffic went through cellular networks, but they were by far the most expensive part of YouTube's content delivery pipeline, accounting for approximately 8,500 gigawatt-hours of electricity usage, enough to power over 750,000 U.S. homes [3]. At over 10 times the electricity usage per unit of traffic, the relative inefficiency of cellular transmission is clear. Eventually, the video data reaches your device for viewing. While your device might not technically be part of YouTube's content delivery pipeline, we can't overlook the cost of moving those pixels. Devices accounted for an estimated 6,100 gigawatt-hours of electricity usage [3]: that's over half a million U.S. homes' worth of electricity. In total, Preist et al.'s research estimated that YouTube traffic consumed 19.6 terawatt-hours of electricity in 2016 [3]. Using the world emissions factor for electricity generation as reported by the International Energy Agency, they place the resulting carbon emissions at 10.2 million metric tons of CO₂ (offset to 10.1 after Google's renewable energy purchases for its data center activities). YouTube emitted nearly as much CO₂ as a metropolitan area like Auckland, New Zealand did in 2016. Put another way, 10.2 MtCO₂ is equivalent to the yearly footprint of approximately 2.2 million cars in the United States. YouTube's monthly active user count has increased by at least 33% since 2016 (from 1.5 billion in 2017 to over 2 billion in 2019), which means its CO₂ emissions in 2020 could be even higher than Preist et al.'s 2016 estimate. If we assume that the emissions factor for electricity usage is similar for each part of the pipeline, we can get a rough idea of the carbon footprint profile of YouTube. However, it's important to note that this breakdown isn't necessarily representative of the entirety of the information and technology sector.
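Pulling the per-stage figures above together, here is a small sketch that converts them into the "homes powered" equivalents quoted in the text (the 10,700 kWh/year household figure is an assumption, roughly the EIA average for 2016):

# Sketch: convert the per-stage electricity estimates quoted above into
# "U.S. homes powered for a year" equivalents.
KWH_PER_US_HOME_PER_YEAR = 10_700   # assumed average household consumption (approx. EIA 2016)

stages_gwh = {
    "internet backbone": 1_900,
    "fixed-line networks": 4_400,
    "cellular networks": 8_500,
    "devices": 6_100,
}

for stage, gwh in stages_gwh.items():
    homes = gwh * 1_000_000 / KWH_PER_US_HOME_PER_YEAR  # GWh -> kWh, then divide by per-home usage
    print(f"{stage}: {gwh:,} GWh ≈ {homes:,.0f} homes")
# e.g. cellular networks: 8,500 GWh ≈ 794,393 homes, i.e. the "over 750,000" quoted above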
A 2018 study by McMaster University researchers Lotfi Belkhir and Ahmed Elmeligi paints a surprisingly different picture for the sector as a whole [1]. To compare the two studies, we can group the "Internet", "Residential Network", and "Cellular Network" sections into an umbrella "Networks" category. Belkhir and Elmeligi provide emission estimates for both 2010 (retrospective) and 2020 (prospective). Most surprising is the weight Data Centers and CDNs have in this breakdown. We can speculate that the relatively high bandwidth required to transfer video as a medium contributes at least partially to the disproportionate weight of the "Networks" category for YouTube. In the same study, Belkhir and Elmeligi created two models to project the ICT sector's emissions decades forward. Even their "unrealistically conservative" linear model put tech at a 6–7% share of GHG emissions in 2040, and their exponential model had tech reaching over 14%. What is being done about this? Aside from increasing the efficiency of each part of the pipeline and taking advantage of renewable energy, mindful design could also go a long way. In the context of digital streaming, Preist et al. point out that much of YouTube's traffic comes from music videos, and a good number of those "views" are likely just "listens". If this "listen" to "view" ratio were even just 10%, YouTube could have reduced its carbon footprint by about 117 thousand tons of CO₂ in 2016, just by intelligently sending audio when no video is required. That's over 2.2 million gallons of gasoline worth of CO₂ in savings. Digital streaming is not the only instance where environmentally harmful aspects of technology are outside of public consciousness. Tech is rarely perceived as environmentally toxic, but here's a surprising fact: Santa Clara County, the heart of "Silicon Valley", has the most EPA-classified "Superfund" (highly polluted) sites in the nation. These 23 locations may be impossible to fully clean. Silicon Valley is primarily to blame.
2
New Approach to Identify Genetic Boundaries of Species Could Impact Policy
Evolutionary biologists have developed a new computational approach to genomic species delineation that improves upon current methods and could impact policy in the future. Evolutionary biologists model the process of speciation, which follows population formation, improving on current species delineation methods. A new approach to genomic species delineation could impact policy and lend clarity to legislation for designating a species as endangered or at risk. The coastal California gnatcatcher is an unassuming little gray songbird that's been at the epicenter of a legal brawl for nearly 28 years, ever since the U.S. Fish and Wildlife Service listed it as threatened under the Endangered Species Act. Found along the Baja California coast, from El Rosario, Mexico to Long Beach, Calif., its natural habitat is the rapidly declining coastal sagebrush that occupies prime, pristine real estate along the West Coast. When Polioptila californica was granted protection, the region's real estate developers sued to delist it. Central to their argument, which was dismissed in a federal court, was whether it was an independent species or just another population of a more widely found gnatcatcher. This distinction would dictate its threatened status. Evolutionary biologists have developed a new computational approach to genomic species delineation that improves upon current methods and could impact similar policy in the future. This approach is based on the fact that in many groups of organisms it can be problematic to decide where one species begins and another ends. "In the past, when it was challenging to distinguish species based on external characters, scientists relied on approaches that diagnosed signatures in the genome to identify 'breaks' or 'structure' in gene flow indicative of population separation. The problem is this method doesn't distinguish between two populations separated geographically versus two populations being two different species," said Jeet Sukumaran, computational evolutionary biologist at San Diego State University and lead author of a study published May 13 in PLoS Computational Biology. "Our method, DELINEATE, introduces a way to distinguish between these two factors, which is important because most of the natural resources management policy and legislature in our society rests on clearly defined and named species units." Typically, scientists will use a range of different methods to identify boundaries between different species, including statistical analysis and qualitative data to distinguish between population-level variation and species-level variation in their samples, to complete the classification of an organism. In cases where it is difficult to determine whether the variation between individuals reflects variation within a species or differences between species, they often turn to genomic data-based approaches for the answer. This is when scientists often use a model that generates an evolutionary tree relating different populations. Sukumaran and his co-authors, evolutionary biologists at the University of Michigan, Ann Arbor, and the University of Kansas, Lawrence, add a second layer of information to this approach to explicitly model the actual speciation process. This allows them to understand how separate populations sometimes evolve into distinct species, which is the basis for distinguishing between populations and species in the data.
His co-authors are L. Lacey Knowles and Mark Holder. "Our method allows researchers to make statements about how confident they are that two populations are members of the same species," Holder said. "That is an advance over just making a best estimate of species assignments." Whether some of the population lineages in the sample are assigned to existing species or classified as entirely new species depends on two factors. One is the age of the population isolation events, such as the splitting of an ancestral population into multiple daughter populations, which is how species are "born" in an extended process of speciation. The other is the rate of speciation completion, which is the rate at which the nascent or incipient species "born" from population splitting events develop into true full species. Organisms can look alike, Sukumaran said, "even though they are actually distinct species separated by many tens or hundreds of thousands or even millions of years of evolution." "When rivers change course, when terrain changes, previously cohesive populations get fragmented, and the genetic makeup of the two separate populations, now each a population in their own right, can diverge," Sukumaran said. "Eventually, one or both of these populations may evolve into separate species, and may (or may not) already have reached this status by the time we look at them. "Yet individuals of these two populations may look identical to us based on their external appearances, as differences in these may not have had time to 'fix' in either population. This is when we turn to genomic data to help guide us toward deciding whether we are looking at two populations of the same species, or two separate species." While scientists agree that it is critical to distinguish between populations and species boundaries in genomic data, there is not always a lot of agreement on how to go about doing it. "If you ask ten biologists, you will get twelve different answers," Sukumaran said. With this framework, scientists can have a better understanding of the status of any species, but especially of multiple independent species that look alike. The work has implications far beyond real estate battles. Many fields of science and medicine depend on the accurate demarcation and identification of species, including ecology, evolution, conservation and wildlife management, agriculture and pest management, and epidemiology and vector-borne disease management. These fields also intersect with government, legislation and policy, with major implications for the day-to-day lives of broader human society. The DELINEATE model is a first step in a process that will need to be further refined. Funding for this research came from the National Science Foundation.
29
FDA authorizes use of Pfizer's Covid vaccine for 5- to 11-year-olds
Jeff Kowalsky/AFP via Getty Images. The Food and Drug Administration has authorized a Pfizer-BioNTech COVID-19 vaccine for children ages 5 to 11. This lower-dose formulation of the companies' adult vaccine was found to be safe and 90.7% effective in preventing symptomatic COVID-19. The agency acted Friday after a panel of independent scientists advising the FDA strongly supported the authorization on Tuesday. The FDA says the emergency use authorization is based on a study of approximately 4,700 children ages 5 to 11. "As a mother and a physician, I know that parents, caregivers, school staff, and children have been waiting for today's authorization. Vaccinating younger children against COVID-19 will bring us closer to returning to a sense of normalcy," said the FDA's acting commissioner, Dr. Janet Woodcock, in a statement. She went on to assure parents that the agency had rigorously evaluated the data and "this vaccine meets our high standards." The next step in the process before the vaccine can be released to pediatricians, pharmacies and other distribution points will be a meeting of an advisory panel to the Centers for Disease Control and Prevention next Tuesday. Depending on the outcome of that committee's deliberations, the CDC's director, Dr. Rochelle Walensky, would then have the final say on whether the vaccine can be used and in what circumstances. Once Walensky weighs in, children in this age group could, conceivably, begin to receive their first shot in early November. A dose of the Pfizer vaccine for young children contains one-third the amount of active ingredient used in the vaccine for those 12 years old and up. Children would receive a second dose 21 days or more after their first shot. The vaccine also differs from the existing formulation that teens and adults have been getting in that it can be stored in a refrigerator for up to 10 weeks, making it easier for private medical offices, schools and other locations to keep and administer the vaccine. Children ages 5 to 11 have accounted for approximately 9% of reported COVID-19 cases in the U.S. overall and currently account for approximately 40% of pediatric COVID-19 cases, according to Dr. Doran Fink, clinical deputy director of the FDA's Division of Vaccines and Related Products Applications. Currently, he says, the case rate of COVID-19 among children ages 5 to 11 is "near the highest" of any age group. Unvaccinated children who get COVID-19 can develop a serious complication called multisystem inflammatory syndrome, or MIS-C. More than 5,000 children have gotten the condition so far, according to Dr. Fiona Havers, a medical officer at the CDC who presented data this week to the FDA committee. In deliberations at Tuesday's advisory panel, scientists and clinicians discussed the risks of side effects from the vaccine. Myocarditis and pericarditis, which can occur after viral infections, including COVID-19, have been seen as rare side effects after vaccination with the two mRNA vaccines, Pfizer and Moderna, especially among young men. In the Pfizer-BioNTech study submitted to the FDA, there were no cases of myocarditis in the children studied.
However, given that the highest risk for these rare side effects is among teen males, the agency assessed the risks and benefits of vaccinating younger children and concluded that the benefits of preventing hospitalization from COVID-19 outweigh the possible risks of the side effects. During Tuesday's advisory panel discussion, Capt. Amanda Cohn, a physician and medical officer with the CDC and also a voting member of the FDA committee, said that vaccinating young children against COVID-19 can save lives and keep kids out of the hospital. "We have incredible safety systems in place to monitor for the potential for myocarditis in this age group, and we can respond quickly," she said. "To me, the question is pretty clear. We don't want children to be dying of COVID, even if it is far fewer children than adults, and we don't want them in the ICU."
4
Geometric Unity
The Theory of Geometric Unity is an attempt by Eric Weinstein to produce a unified field theory by recovering the different, seemingly incompatible geometries of fundamental physics from a general structure with minimal assumptions. On April 1, 2020, Eric prepared for release a video of his 2013 Oxford lecture on Geometric Unity as a special episode of The Portal Podcast. Eric has set April 1 as a new tradition, a day on which we are encouraged to say heretical things that we truly believe, in good faith, without fear of retribution from our employers, institutions, or communities. In this spirit, Eric released the latest draft of his Geometric Unity manuscript on April 1, 2021. An attempt is made to address a stylized question posed to Ernst Strauss by Albert Einstein regarding the amount of freedom present in the construction of our field theoretic universe: "What really interests me is whether God had any choice in the creation of the world." Does something unprecedented happen when we finally learn our own source code? View the full 2013 Oxford lecture transcript, or download the latest draft of the Geometric Unity manuscript.
84
Topicctl – an easier way to manage Kafka topics
Today, we're releasing topicctl, a tool for easy, declarative management of Kafka topics. Here's why. Apache Kafka is a core component of Segment's infrastructure. Every event that hits the Segment API goes through multiple Kafka topics as it's processed in our core data pipeline (check out this article for more technical details). At peak times, our largest cluster handles over 3 million messages per second. While Kafka is great at efficiently handling large request loads, configuring and managing clusters can be tedious. This is particularly true for topics, which are the core structures used to group and order messages in a Kafka cluster. The standard interfaces around creating and updating topics aren't super user-friendly and require a thorough understanding of Kafka's internals. If you're not careful, it's fairly easy to make accidental changes that degrade performance or, in the worst case, cause data loss. These issues aren't a problem for Kafka experts dealing with a small number of fairly static topics. At Segment, however, we have hundreds of topics across our clusters, and they're used by dozens of engineering teams. Moreover, the topics themselves are fairly dynamic. New ones are commonly added and existing ones are frequently adjusted to handle load changes, new product features, or production incidents. Previously, most of this complexity was handled by our SRE team. Engineers who wanted to create a new topic or update an existing one would file a ticket in Jira. An SRE team member would then read through the ticket, manually look at the actual cluster state, figure out the set of commands to run to make the desired changes, and then run these in production. From the requester's perspective, the process was a black box. If there were any later problems, the SRE team would again have to get involved to debug the issues and update the cluster configuration. This system was tedious but generally worked. However, we recently decided to change the layout of the partitions in our largest topics to reduce networking costs. Dealing with this rollout in addition to the usual stream of requests for topic-related changes, each of which would have to be handled manually, would be too much for our small SRE team to deal with. We needed a better way to apply bigger changes ourselves, while at the same time making it easier for others outside our team to manage their topics. We decided to develop tooling and associated workflows that would make it easier and safer to manage topics. Our desired end state had the following properties:

- All configuration lives in git.
- Topics are defined in a declarative, human-friendly format.
- Changes are applied via a guided, idempotent process.
- Most changes are self-service, even for people who aren't Kafka experts.
- It's easy to understand the current state of a cluster.

Many of these were shaped by our experiences making AWS changes with Terraform and Kubernetes changes with kubectl. We wanted an equivalent to these for Kafka! We developed a tool, topicctl, that addresses the above requirements for Kafka topic management. We recently open-sourced topicctl, and we're happy to have others use it and work with us to make it better. The project README has full details on how to configure and run the tool, so we won't repeat them here. We would, however, like to cover some highlights. As with kubectl, resources are configured in YAML, and changes are made via an apply process.
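That apply workflow follows the familiar declarative diff-and-apply pattern. Here is a minimal, tool-agnostic sketch of that pattern in Python; the function names and config shape are illustrative and are not taken from topicctl's actual implementation:

# Illustrative sketch of a declarative "apply" loop, in the spirit of
# kubectl/Terraform-style tooling. Names and config shape are hypothetical.
from typing import Any

def diff(desired: dict[str, Any], actual: dict[str, Any]) -> dict[str, tuple[Any, Any]]:
    """Return {key: (actual_value, desired_value)} for every field that differs."""
    keys = set(desired) | set(actual)
    return {k: (actual.get(k), desired.get(k)) for k in keys if actual.get(k) != desired.get(k)}

def apply_topic(desired: dict[str, Any], actual: dict[str, Any], approve) -> None:
    changes = diff(desired, actual)
    if not changes:
        print("No changes; cluster already matches config.")   # idempotent no-op
        return
    for key, (old, new) in changes.items():                    # show the diff for review
        print(f"  {key}: {old!r} -> {new!r}")
    if approve():                                               # guided: ask before acting
        print("Applying changes...")   # a real tool would call the Kafka admin APIs here
    else:
        print("Aborted; nothing applied.")

# Example run with made-up topic settings
desired = {"partitions": 16, "retention.ms": 86_400_000}
actual = {"partitions": 8, "retention.ms": 86_400_000}
apply_topic(desired, actual, approve=lambda: True)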
Each apply run compares the state of a topic in the cluster to the desired state in the config. If the two are the same, nothing happens. If there are any differences, topicctl shows them to the user for review, gets approval, and then applies them in the appropriate way. Currently, our topic configs include the following properties:

- Retention time and other config-level settings
- Replica placement strategy

The last property is the most complex and was a big motivation for the project. The tool supports static placements as well as strategies that balance leader racks and/or ensure all of the replicas for a partition are in the same rack. We're actively using these strategies at Segment to improve performance and reduce our networking costs. In addition to orchestrating topic changes, topicctl also makes it easier to understand the current state of topics, brokers, consumer groups, messages, and other entities in a cluster. Based on user feedback, and after evaluating gaps in existing Kafka tooling, we decided that a repl would be a useful interface to provide. We used the c-bata/go-prompt library and other components to create an easy, visually appealing way to explore inside a Kafka cluster. In addition to the repl, topicctl exposes a more traditional "get" interface for displaying information about cluster resources, as well as a "tail" command that provides some basic message tailing functionality. topicctl is implemented in pure Go. Most interaction with the cluster is through ZooKeeper, since this is often the easiest (or sometimes only) choice and is fully compatible with older Kafka versions. Some features, like topic tailing and lag evaluation, use higher-level broker APIs. In addition to the go-prompt library mentioned above, the tool makes heavy use of samuel/go-zookeeper (for interaction with ZooKeeper) and the segment/kafka-go library (for interaction with brokers). At Segment, we've placed the topicctl configuration for all of our topics in a single git repo. When engineers want to create new topics or update existing ones, they open up a pull request in this repo, get the change approved and merged, and then run topicctl apply via Docker on a bastion host in the target account. Unlike the old method, most changes are self-service, and it's significantly harder to make mistakes that will cause production incidents. Bigger changes still require involvement from the SRE team, but this is less frequent than before. In addition, the team can use topicctl itself for many of these operations, which is more efficient than the old tooling. Through better tooling and workflows, we were able to reduce the burden on our SRE team and provide a better user experience around the management of Kafka topics. Feel free to check out topicctl and let us know if you have any feedback!
2
Buyers vs. Users
Let's say you sell a SaaS product, with software developers as the target audience. They get in touch to talk about testing the product in their organization. Even if they are the ones talking to Sales initially and driving a proof of concept, are they also the ones buying it? Not necessarily. In some organizations, software developers can make buying decisions, either directly by using the company credit card or through their manager, who purchases based on a majority vote. But in other organizations, the person buying the product is not the same as the one using it later on. The buyer might be someone from the purchasing department or a C-level executive. In either case, have you tried talking to them in the same way you talk to developers about your product? I did initially, and it wasn't fruitful. A CTO doesn't care too much about all the great features we provide. First of all, a CTO wouldn't directly use the product, but also, they have different metrics to determine success for themselves. Metrics to determine success? For instance, a developer might determine their success by the number of features delivered, the number of bugs fixed, or how many code reviews they performed. On the other hand, a CTO might be concerned with the efficiency of the entire developer organization or with how to save costs. With these metrics in mind, can your product make an impact? What are the benefits that enable cost savings and an increase in efficiency? Now, how is that critical for sales conversations? When you talk to developers in, let's say, an initial discovery call, you discuss their use cases and see if you might be able to help them. It's a technical conversation. Then, you need to find out about the next steps and who ultimately makes the purchasing decision. In case somebody else makes the purchasing decision, find out as much as you can about the process. Will there be a meeting? Who else is attending? Can you talk to the decision-maker directly? For these discussions, you need to have at hand the arguments and benefits that matter to them. How can you save costs? How can you enable the team to be more efficient? What is your story? Even if you can't talk to the decision-maker directly, offer to meet with the developers to prep them for the decision-making meeting with your prepared arguments. It's common in today's organizations to have influencers and advocates who sell on your behalf. They like the company and the product, and now put in the effort to get it purchased. The better you can support them in these discussions, the higher the chance of closing a deal. Now, what are your benefits? Sit down and come up with the benefits that matter to the users. Then, do the same again for the buyer. What do they care about, and how can you make their lives easier?
1
Updates for Supabase Functions
The question on everyone's mind: are we launching Supabase Functions? Well, it's complicated. Today we're announcing part of Functions - Supabase Hooks - in Alpha, for all new projects. We're also releasing support for Postgres Functions and Triggers in our Dashboard, and some timelines for the rest of Supabase Functions. Let's cover the features we're launching today before the item that everyone is waiting for: Supabase Functions. First up, Postgres Functions (not to be confused with Supabase Functions!). Postgres has built-in support for SQL functions. Today we're making it even easier for developers to build PostgreSQL Functions by releasing a native Functions editor. Soon we'll release some handy templates! You can call PostgreSQL Functions with supabase-js using your project API [Docs]. Triggers are another amazing feature of Postgres, which allows you to execute any SQL code after inserting, updating, or deleting data. While triggers are a staple for database administrators, they can be a bit complex and hard to use. We plan to change that with a simple interface for building and managing PostgreSQL triggers. They say building a startup is like jumping off a cliff and assembling the plane on the way down. At Supabase it's more like assembling a 747 since, although we're still in Beta, thousands of companies depend on us to power their apps and websites. For the past few months we've been designing Supabase Functions based on our customer feedback. A recurring request from our customers is the ability to trigger their existing Functions. This is especially true for our Enterprise customers, but also Jamstack developers who develop API Functions directly within their stack (like Next.js API routes, or Redwood Serverless Functions). To meet these goals, we're releasing Supabase Functions in stages. (Note: Database Webhooks were previously called "Function Hooks".) Today we're releasing Function Hooks in ALPHA. The ALPHA tag means that it is NOT stable, but it's available for testing and feedback. The API will change, so do not use it for anything critical. You have been warned. Hooks? Triggers? Firestore has the concept of Function Triggers, which are very cool. Supabase Hooks are the same concept, just with a different name. Postgres already has the concept of Triggers, and we thought this would be less confusing. Database Webhooks allow you to "listen" to any change in your tables to trigger an asynchronous Function. You can hook into a few different events: INSERT, UPDATE, and DELETE. All events are fired after a database row is changed. Keen eyes will be able to spot the similarity to Postgres triggers, and that's because Database Webhooks are just a convenience wrapper around triggers. Supabase will support several different targets. If the target is a Serverless function or an HTTP POST request, the payload is automatically generated from the underlying table data. The format matches Supabase Realtime, except in this case you don't need a client to "listen" to the changes. This provides yet another mechanism for responding to database changes. As with most of the Supabase platform, we leverage PostgreSQL's native functionality to implement Database Webhooks (previously called "Function Hooks"). To build hooks, we've released a new PostgreSQL extension, pg_net, an asynchronous networking extension with an emphasis on scalability/throughput. In its initial (unstable) release, the extension is capable of over 300 requests per second and is the networking layer underpinning Database Webhooks.
For a complete view of capabilities, check out the docs. pg_net allows you to make asynchronous HTTP requests directly within your SQL queries. After making a request, the extension will return an ID. You can use this ID to collect a response, and you can cast the response to JSON within PostgreSQL. To build asynchronous behavior, we use a PostgreSQL background worker with a queue. This, coupled with the libcurl multi interface, enables us to do multiple simultaneous requests in the same background worker process. Shout out to Paul Ramsey, who gave us the implementation idea in pgsql-http. While we originally hoped to add background workers to his extension, the implementation became too cumbersome and we decided to start with a clean slate. The advantage of being async can be seen by making some requests with both extensions: the sync version waits until each request is completed to return the result, taking around 3.5 seconds for 10 requests, while the async version returns almost immediately, in 1.5 milliseconds. This is really important for Supabase hooks, which run requests for every event fired from a SQL trigger - potentially thousands of requests per second. This is only the beginning! First we'll thoroughly test it and make a stable release, and then we expect to add support for more targets. Database Webhooks is enabled today on all new projects. Find it under Database > Alpha Preview > Database Webhooks.
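To make the request-then-collect flow described above concrete, here is a rough sketch of driving pg_net from application code in Python. The net.http_get and net.http_collect_response names are assumptions about the extension's SQL API made for illustration; check the pg_net docs for the actual function names.

# Sketch: fire an async HTTP request via pg_net and poll for the response.
# The SQL function names below are assumed, not confirmed against pg_net's docs.
import time
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")  # hypothetical connection settings
conn.autocommit = True

with conn.cursor() as cur:
    # pg_net queues the request and returns an ID immediately instead of blocking.
    cur.execute("SELECT net.http_get(%s)", ("https://example.com/api/health",))
    request_id = cur.fetchone()[0]

    # The request is handled by a background worker, so wait briefly and collect the result.
    time.sleep(1)
    cur.execute("SELECT net.http_collect_response(%s)", (request_id,))
    print(cur.fetchone())

conn.close()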
1
Python Ternary Operator
Python ternary operators, also called conditional expressions, are operators that evaluate something based on a binary condition. Ternary operators provide a shorthand way to write conditional statements, which makes the code more concise. In this tutorial, let us look at what the ternary operator is in Python and how we can use it in our code with some examples.

Syntax of Ternary Operator
The ternary operator is available from Python version 2.5 onwards, and the syntax is:

[value_if_true] if [expression] else [value_if_false]

Note: The conditional is an expression, not a statement. It means that you can't use assignment statements or pass other statements within a conditional expression.

Introduction to Python Ternary Operators
Let us take a simple example to check if a student's result is pass or fail based on the marks obtained.

Using a traditional approach
We are familiar with "if-else" conditions, so let's first write a program that prompts for the student's marks and returns either pass or fail based on the condition specified.

marks = input('Enter the marks: ')
if int(marks) >= 35:
    print("The result is Pass")
else:
    print("The result is Fail")

Output:
Enter the marks: 55
The result is Pass

Now, instead of using a typical if-else condition, let us try using ternary operators.

Example of Ternary Operator in Python
The ternary operator evaluates the condition first. If the result is true, it returns value_if_true. Otherwise, it returns value_if_false. The ternary operator is equivalent to the if-else condition:

if condition:
    value_if_true
else:
    value_if_false

If you are from a programming background like C#, Java, etc., the ternary syntax looks like this:

condition ? value_if_true : value_if_false

However, in Python, the syntax of the ternary operator is slightly different. The following example demonstrates how we can use the ternary operator in Python.

marks = input('Enter the marks: ')
print("The result is Pass" if int(marks) >= 35 else "The result is Fail")

Output:
Enter the marks: 34
The result is Fail
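One more usage pattern worth showing (an addition, not from the original tutorial): the result of a conditional expression can be assigned directly to a variable, and the same expression works inside comprehensions.

# Assigning the result of a conditional expression to a variable.
marks = 55
result = "Pass" if marks >= 35 else "Fail"
print(f"The result is {result}")  # The result is Pass

# Conditional expressions also work inside comprehensions.
all_marks = [20, 47, 35, 90]
results = ["Pass" if m >= 35 else "Fail" for m in all_marks]
print(results)  # ['Fail', 'Pass', 'Pass', 'Pass']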
1
Increased Errors in Auth0
Increased errors in Auth0. Current status: Resolved | Last updated at July 1, 2021, 21:11 UTC. Affected environments: US-1 Preview, US-1 Production. We have posted the final update to our detailed root-cause analysis for our service disruption on April 20, 2021. This update includes completion of the remaining six action items. https://cdn.auth0.com/blog/Detailed_Root_Cause_Analysis_(RCA)_4-2021.pdf History for this incident: updates were posted on April 20, 2021 between 15:43 and 23:53 UTC.
2
Re-Introducing Status Desktop – Beta v0.1.0. Available on Windows, Mac, Linux
Status Desktop returns as beta v0.1.0 to provide private, secure communication on Mac, Windows, and Linux. Some may remember the moment in November 2018 at DevconIV when Status officially pulled the plug on the core contributor Slack and migrated entirely over to the Status Desktop alpha. It was a massive moment for Status and the mission to provide private, secure communication no matter where you are – on the go with your smartphone or at work at your desk. The conversation in Status Desktop was flowing and the product was improving each day - dogfooding at its finest. However, as many know, building infrastructure and privacy-preserving tools from the ground up that adhere to the strict values and principles of the Status community is challenging, to say the least. With that, the team decided to prioritize the Status mobile app and put development of the desktop client on pause. Fast forward roughly one year: v1 of the Status mobile app is live in the App Store and Play Store, and development of desktop is back underway, driven by a dedicated team. Today, Status officially re-introduces the Desktop client as beta v0.1.0 – marking a huge milestone in bringing decentralized messaging no matter where you are. The team, along with community contributors, has been steadily working on the client for the past few months. Developer builds have only been available via the Github repository for the team and those willing to try out experimental software. With key features implemented, bringing the desktop messenger close to feature parity with the mobile app, it is officially ready for wider testing and can be downloaded for Mac, Windows, and Linux here. While the Status mobile app provides a holistic experience for communication and access to Ethereum with an integrated private messenger, Ethereum wallet, and Web3 DApp browser, the desktop app focuses initially on the messenger. It includes all the key features of the mobile application, including private 1:1 chats, private group chats, community public channels, images in 1:1 and group chats, emoji reactions, and more. Status Desktop is truly the first desktop messenger built in line with the Status Principles. It leverages Waku for peer-to-peer messaging, just like the mobile app. Waku, the fork of the Whisper protocol, aims to remove centralized, rent-seeking intermediaries and single points of failure, decentralize the network, and provide censorship resistance. Desktop includes access to the Status Sticker Market as well as the ability to register and display stateofus.eth ENS usernames, both of which require SNT. For this reason, the wallet is available but hidden from the UI unless toggled on under advanced settings (Profile >> Advanced >> Wallet Tab). Status Desktop has not undergone a formal security audit, so wallet features are used at the risk of the user. The Web3 DApp browser is currently removed from the product entirely while the team builds some final features, after which a security audit can be conducted. Access to DApps is a crucial part of the Status user experience, but only when strong privacy and security guarantees can be made. As always, Status will not cut corners and jeopardize the security of the community. Both the wallet and DApp browser are under active development, and a security audit is planned for the near future. When the browser is enabled, Status will provide a window into the world of Web3 and a communication layer to Ethereum.
Current Status users can import their existing accounts and then easily sync their mobile and desktop apps for a seamless experience across devices. Simply import an account with a seed phrase, then head to the Profile tab, open Device settings, and pair your devices. Install Status Desktop and test it out for Mac, Windows, or Linux. Status Desktop is un-audited beta software, and builds are made available for testing. For this reason, upon installation you will need to drag Status into the applications folder on your desktop and then manually open the app: right click >> Open.
1
System-Level Packaging Tradeoffs
Leading-edge applications such as artificial intelligence, machine learning, automotive, and 5G, all require high bandwidth, higher performance, lower power and lower latency. They also need to do this for the same or less money. The solution may be disaggregating the SoC onto multiple die in a package, bringing memory closer to processing elements and delivering faster turnaround time. But the tradeoffs for making this happen are becoming increasingly complex, regardless of the advanced packaging approaches. In PCB-based systems, there are lots of devices on one board. Over time, as scaling allows for tighter integration and increased density, it is possible to bring everything together onto a single die. But this also is swelling the size of some designs, which actually can exceed reticle size even at the most advanced process nodes. But choosing what to keep on the same die, what components to put on other die, and how to package them all together presents a lot of choices. “2.5D and 3D is making this very complex, and people cannot always understand what’s going on,” said Rita Horner, senior staff, product manager at Synopsys. “How do you actually go about doing that? How do you do the planning? How do you do the exploration phase before knowing what to put in what, and in what configurations? Those are the challenges a lot of people are facing. And then, which tool do you use to optimally do your planning, do your design, do your implementation validation? There are so many different point tools in the market it’s unbelievable. It’s alphabet soup, and people are very confused. That’s the key. How do you actually make this implementation happen?” Fig. 1: Some advanced packaging options. Source: Synopsys Historically, many chipmakers have taken an SoC approach that also included some analog content and memory. This was likely a single, integrated SoC, where all of the IP was on a single die. “Some people still do that because for their business model, that makes the most sense,” said Michael White, product marketing director at Mentor, a Siemens Business. “But today, there are certainly others putting the CPU on one die, having that connected [TSMC] InFO-style, or with a silicon interposer connected up to an HBM memory. There may be other peripheral IP on that same silicon interposer to perform certain functions. Different people use different advanced packaging approaches, depending on where are they going. Are they going into a data center, where they can charge a real premium for that integrated package they’ve created? Or are they going in mobile, where they’re super cost-sensitive and power-sensitive? They might tend to drift more towards the InFO-style package approach.” Alternatively, if the die configuration contains relatively few die, and is relatively symmetric, TSMC InFo may be utilized where they’re connecting everything up through the InFo and don’t have a silicon interposer, with everything sitting on an organic substrate. There are many different configurations, he said. “It’s really driven by how many die do you have? Is the layout of the die or chiplets that you’re trying to connect up relatively few and a rather symmetric configuration? InFo style technology as well as other OSAT approaches are used versus a silicon interposer, if possible, because the silicon interposer is another chunk of silicon that you have to create/manufacture so there’s more cost. 
Your bill of materials has more cost with that, so you use it if your performance absolutely drives the need for it, or the number of chiplets that you have drive you in that direction. Another approach we’re seeing is folks trying to put some active circuitry in that silicon interposer, such as silicon photonics devices in the silicon interposer. Big picture, packaging technology being used is all over the map, and it’s really a function of the particular application, the market, and the complexity of the number of chiplets or dies, and so on.” System-level problems All of this speaks to the ongoing quest to continue to make gains in the system, which is manifesting in any number of architecture choices, compute engine decisions, in addition to packaging options. Paramount in the decision-making process is cost. “Chiplets are very interesting there. That seems to be the preferred solution now for dealing with the cost,” said Kristof Beets, senior director of technical product management at Imagination Technologies. “We’re seeing a lot of GPUs that have to become scalable by just copying multiple and connecting them together. Ramping up nodes is great in one way, but the cost isn’t so great. The question is really how to effectively create one 7nm or 5nm chip, and then just hook them together with whatever technique is chosen to ideally double the performance. Engineering groups are looking to do this to create multiple performances out of just a single chip investment.” Here, the question as to whether you know exactly what application you will be creating your chip for should be posed first. “This market is fast-moving, so you’re never quite sure that requirements which were set for the application when you started designing the chip will be the same at the end of the design and verification cycle,” said Aleksandar Mijatovic, design engineer at Vtool. “That is the first concern, which has to give you some push towards including more features than you actually need at the time. Many companies will try to look forward and get ready for a change in standards, change in protocols, change in speeds in order not to lose that one year of market which will arrive, when other components in that application get upgraded, and they are just put on the market something, which is based on a one-year-old standard.” Other issues must be paid attention to. “Full design, full verification plus mask making, and manufacturing is quite an expensive process so something you may think as too much logic might be there just because it turned out to be cheaper than to produce two or three flavors of one chip doing full verification, full masks and the whole packaging lines,” Mijatovic said. “Sometimes it’s just cheaper not to use some features than to manufacture a lot of them similar to when AMD came out with dual core processors which were quad cores, just with two cores turned off because it was already set up and nobody cared to pay for the expense of shrinking. A lot comes down to architecture, market research, and bean counting.” When it comes to chiplets, from the perspective of the verification domain, the biggest challenge for pre-silicon verification is that there might be more complexity (i.e., a bigger system), at least potentially, and more interfaces and package-level fabric to verify, Sergio Marchese, technical marketing manager at OneSpin suggested. 
“On the other hand, if you have a bunch of chiplets fabricated on different technology nodes, those different nodes should not affect pre-silicon verification. One thing that is not clear is: if you figure out that there is something wrong, not with a specific chiplet, but with their integration, what’s the cost for a ‘respin?’” One-off solutions Another aspect to advanced packaging approaches today is that many are unique to a single design, and while one company may be building the most advanced chip, they don’t get enough out of scaling in terms of power and performance. They may get the density, but they don’t get the benefits that they used to out of scaling. They turn to packaging to get extra density with an architectural change. While this is possible for leading-edge chipmakers, what will it take for mainstream chipmakers to access this kind of approach for highly complex and integrated chips, chiplets, and SoCs? This requires a very thorough understanding of the dynamics of how these systems are getting connected together. “Typically, the architect, chip designers, package designers, PCB designers all operate independent of each other, sometimes sequentially, in their own silos,” said Synopsys’ Horner. “But die-level complexities are increasing so much that they no longer can operate independently of each other because of the complexity. If they really want to make multi-die integration go mainstream, be affordable, more reliable, with faster turnaround time to get to the market, there needs to be a more collaborative environment where all these different disciplines, individuals actually can work together from early stages of the design to implementation, to validation, and even to the manufacturing. If there is a new next-generation iteration happening, it would be nice to have a platform to go back to and learn from to be able to further optimize for the next generation without going back to paper and pencil, which a lot of people are using to do their planning and organizing before they start digging in.” However, there is no platform to allow all these different disciplines to collaborate with each other and to learn from each other. “This collaborative environment would allow people to even go back to the drawing board when they realize they must disaggregate the die because it’s getting too large,” Horner said. “And because the die has to be disaggregated, additional I/O must be added. But what type is best? If the I/O is put on one side of the chip versus the other, what’s the impact of the substrate layer design. What that means translates to the package and the board level, where the balls are placed in the package, versus the C4 bumps in the substrate, or the micro bumps in a die.” She suggested that the ideal situation is to have a common unified platform to bring all the information into the simulation environment. “You could bring the information from the DDR process, or from the HBM, or the CMOS technology where the CPUs may be sitting, and then have enough information brought in to extract the parasitics. And then you can use simulation to make sure the wiring that you’re doing, the spacing and the width of traces that you’re using for your interconnect, or the shielding you’re doing, are going to be able to meet the performance requirement. It is no different than past approaches, but the complexity is getting high. In the past when you did a multi die in the package, there were very few traces going between every part. 
Now, with the HBM you have thousands just for one HBM connection. It’s very complex. That’s why we are seeing silicon interposers enabling the interconnect, because it allows the fine granularity of the width and spaces that are needed for this level of density.” Looking past Moore While there may be more attention being paid to advanced “More Than Moore” approaches, Mentor’s White said there are plenty of engineering teams still following Moore’s Law. They are building an SoC with GPUs and memory on a single die. “We still see that, but we definitely see folks who also are looking at building that assembly with multiple die,” Mentor’s White said. “Today, that might include a CPU sitting next to a GPU connected up to HBM memory, and then some series of other supporting IP chiplets that may be high speed interfaces, or RF, to Bluetooth or something else. We certainly see lots of companies talking about that approach, as well, of having multiple chiplets, including a GPU, and all of this assembly — and then using that as an easier way to then substitute in or out some of those supporting chiplets differently for different markets. In one particular market, you’re going to have RF and whatever. Another market may demand a slightly different set of supporting chiplets on that SoC, maybe more memory, maybe less memory, for whatever that marketplace needs. Or maybe I’m trying to intercept different price points. For the top of the line, I’ve got tons of HBM and a larger processor. Then, for some lower-level technology node, I’ve got a much more modest memory and a smaller GPU because the market pricing would not support that first one.” Other must-have requirements for package design include being able to understand all the different ways you can attach chips to package designs, whether it’s wirebond, flip-chip, stacking or embedding. “You have to have a tool that understands the intricacies of that cross section of the design,” said John Park, product management director for IC packaging and cross-platform solutions at Cadence. Park noted what is often overlooked is a connectivity use model. “This is important because the chip designer may use something, where they take the RTL and netlist it to Verilog. And that’s the connectivity. Board people use schematics for their connectivity. Then, packaging people sit somewhere in the middle, and for a lot of the connectivity, they have the flexibility to assign the I/Os based on better routing on the board level. They need the ability to drive the design with partial schematic, and the flexibility to create their own on-the-fly connectivity to the I/O that are somewhat flexible. They need the ability to work in spreadsheets. It’s not a single source for the connectivity, but it can be, and some people would like it that way. But it’s more important to have a really flexible connectivity model that allows you to drive the schematics, spreadsheets, build connectivity on-the-fly,” he said. Park added that it’s important to have tight integration with mask-level sign-off tools to improve routing, with specific knowledge of metal fill and RDL routing to create higher yielding designs. “The most important aspect is that it be a traditional BGA tool, but with the ability to integrate with mask-level physical verification tools for DRC and LVS. So I can take the layout, point to a rule deck in my verification tool, and any errors are fed back into the layout so I can correct those. 
That’s an important flow for people who are extending beyond BGA into some of these fan-out wafer level packages,” Park concluded. Fig. 2: Basic chiplet concept. Source: Cadence
4
How to build the next great startup with remote work, with Andreas Klinger
Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more
1
How the Internet Works
Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more
4
How close is nuclear fusion?
Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more
318
Leak reveals $2T of possibly corrupt US financial activity
Thousands of documents detailing $2 trillion (£1.55tn) of potentially corrupt transactions that were washed through the US financial system have been leaked to an international group of investigative journalists. The leak focuses on more than 2,000 suspicious activity reports (SARs) filed with the US government’s Financial Crimes Enforcement Network (FinCEN). Banks and other financial institutions file SARs when they believe a client is using their services for potential criminal activity. However, the filing of an SAR does not require the bank to cease doing business with the client in question. The documents were provided to BuzzFeed News, which shared them with the International Consortium of Investigative Journalists. The documents are said to suggest major banks provided financial services to high-risk individuals from around the world, in some cases even after they had been placed under sanctions by the US government. According to the ICIJ the documents relate to more than $2tn of transactions dating from between 1999 and 2017. One of those named in the SARs is Paul Manafort, a political strategist who led Donald Trump’s 2016 presidential election campaign for several months. He stepped down from the role after his consultancy work for former Ukrainian president Viktor Yanukovych was exposed, and he was later convicted of fraud and tax evasion. According to the ICIJ, banks began flagging activity linked to Manafort as suspicious beginning in 2012. In 2017 JP Morgan Chase filed a report on wire transfers worth over $300m involving shell companies in Cyprus that had done business with Manafort. The ICIJ said Manafort’s lawyer did not respond to an invitation to comment. A separate report details over $1bn in wire transfers by JP Morgan Chase that the bank later came to suspect were linked to Semion Mogilevich, an alleged Russian organised crime boss who is named on the FBI’s top 10 most wanted list. A JP Morgan Chase spokesperson told the BBC: “We follow all laws and regulations in support of the government’s work to combat financial crimes. We devote thousands of people and hundreds of millions of dollars to this important work.” According to BBC Panorama, the British bank HSBC allowed a group of criminals to transfer millions of dollars from a Ponzi scheme through its accounts, even after it had identified their fraud. HSBC said in a statement: “Starting in 2012, HSBC embarked on a multi-year journey to overhaul its ability to combat financial crime across more than 60 jurisdictions.” It added: “HSBC is a much safer institution than it was in 2012.” In a statement released earlier this month FinCEN condemned the disclosure of the leaked documents and said it had referred the matter to the US Department of Justice. “The Financial Crimes Enforcement Network is aware that various media outlets intend to publish a series of articles based on unlawfully disclosed suspicious activity reports (SARs), as well as other sensitive government documents, from several years ago,” it stated. “As FinCEN has stated previously, the unauthorised disclosure of SARs is a crime that can impact the national security of the United States, compromise law enforcement investigations, and threaten the safety and security of the institutions and individuals who file such reports.”
1
The 1968 Sunday Times Golden Globe Race
Robin Knox-Johnston finishing his circumnavigation of the world in Suhaili as the winner of the Golden Globe Race. The Sunday Times Golden Globe Race was a non-stop, single-handed, round-the-world yacht race, held in 1968–1969, and was the first round-the-world yacht race. The race was controversial due to the failure of most competitors to finish the race and because of the apparent suicide of one entrant; however, it ultimately led to the founding of the BOC Challenge and Vendée Globe round-the-world races, both of which continue to be successful and popular. The race was sponsored by the British Sunday Times newspaper and was designed to capitalise on a number of individual round-the-world voyages which were already being planned by various sailors; for this reason, there were no qualification requirements, and competitors were offered the opportunity to join and permitted to start at any time between 1 June and 31 October 1968. The Golden Globe trophy was offered to the first person to complete an unassisted, non-stop single-handed circumnavigation of the world via the great capes, and a separate £5,000 prize was offered for the fastest single-handed circumnavigation. Nine sailors started the race; four retired before leaving the Atlantic Ocean. Of the five remaining, Chay Blyth, who had set off with absolutely no sailing experience, sailed past the Cape of Good Hope before retiring; Nigel Tetley sank with 1,100 nautical miles (2,000 km) to go while leading; Donald Crowhurst, who, in desperation, attempted to fake a round-the-world voyage to avoid financial ruin, began to show signs of mental illness, and then committed suicide; and Bernard Moitessier, who rejected the philosophy behind a commercialised competition, abandoned the race while in a strong position to win and kept sailing non-stop until he reached Tahiti after circling the globe one and a half times. Robin Knox-Johnston was the only entrant to complete the race, becoming the first man to sail single-handed and non-stop around the world. He was awarded both prizes, and later donated the £5,000 to a fund supporting Crowhurst's family. Long-distance single-handed sailing has its beginnings in the nineteenth century, when a number of sailors made notable single-handed crossings of the Atlantic. The first single-handed circumnavigation of the world was made by Joshua Slocum, between 1895 and 1898, [1] and many sailors have since followed in his wake, completing leisurely circumnavigations with numerous stopovers. However, the first person to tackle a single-handed circumnavigation as a speed challenge was Francis Chichester, who, in 1960, had won the inaugural Observer Single-handed Trans-Atlantic Race (OSTAR). [2] In 1966, Chichester set out to sail around the world by the clipper route, starting and finishing in England with a stop in Sydney, in an attempt to beat the speed records of the clipper ships in a small boat. His voyage was a great success, as he set an impressive round-the-world time of nine months and one day – with 226 days of sailing time – and, soon after his return to England on 28 May 1967, was knighted by Queen Elizabeth II. [3] Even before his return, however, a number of other sailors had turned their attention to the next logical challenge – a non-stop single-handed circumnavigation of the world. In March 1967, a 28-year-old British merchant marine officer, Robin Knox-Johnston, realised that a non-stop solo circumnavigation was "about all there's left to do now".
Knox-Johnston had a 32 foot (9.8 m) wooden ketch, Suhaili, which he and some friends had built in India to the William Atkin Eric design; two of the friends had then sailed the boat to South Africa, and in 1966 Knox-Johnston had single-handedly sailed her the remaining 10,000 nautical miles (11,500 mi; 18,500 km) to London. [4] Knox-Johnston was determined that the first person to make a single-handed non-stop circumnavigation should be British, and he decided that he would attempt to achieve this feat. To fund his preparations he went looking for sponsorship from Chichester's sponsor, the British Sunday Times. The Sunday Times was by this time interested in being associated with a successful non-stop voyage but decided that, of all the people rumoured to be preparing for a voyage, Knox-Johnston and his small wooden ketch were the least likely to succeed. Knox-Johnston finally arranged sponsorship from the Sunday Mirror. [5] [6] Several other sailors were interested. Bill King, a former Royal Navy submarine commander, built a 42 foot (12.8 m) junk-rigged schooner, Galway Blazer II, designed for heavy conditions. He was able to secure sponsorship from the Express newspapers. John Ridgway and Chay Blyth, a British Army captain and sergeant, had rowed a 20 foot (6.1 m) boat across the Atlantic Ocean in 1966. They independently decided to attempt the non-stop sail, but despite their rowing achievement were hampered by a lack of sailing experience. They both made arrangements to get boats, but ended up with entirely unsuitable vessels, 30 foot (9.1 m) boats designed for cruising protected waters and too lightly built for Southern Ocean conditions. Ridgway managed to secure sponsorship from The People newspaper. [7] One of the most serious sailors considering a non-stop circumnavigation in late 1967 was the French sailor and author Bernard Moitessier. Moitessier had a custom-built 39 foot (11.9 m) steel ketch, Joshua, named after Slocum, in which he and his wife Françoise had sailed from France to Tahiti. They had then sailed her home again by way of Cape Horn, simply because they wanted to go home quickly to see their children. He had already achieved some recognition based on two successful books which he had written on his sailing experiences. However, he was disenchanted with the material aspect of his fame – he believed that by writing his books for quick commercial success he had sold out what was for him an almost spiritual experience. He hit upon the idea of a non-stop circumnavigation as a new challenge, which would be the basis for a new and better book. [8] [9] By January 1968, word of all these competing plans was spreading. The Sunday Times, which had profited to an unexpected extent from its sponsorship of Chichester, wanted to get involved with the first non-stop circumnavigation, but had the problem of selecting the sailor most likely to succeed. King and Ridgway, two likely candidates, already had sponsorship, and there were several other strong candidates preparing. "Tahiti" Bill Howell, an Australian cruising sailor, had made a good performance in the 1964 OSTAR; Moitessier was also considered a strong contender, and there may have been other potential circumnavigators already making preparations. The route of the Golden Globe Race. The Sunday Times did not want to sponsor someone for the first non-stop solo circumnavigation only to have them beaten by another sailor, so the paper hit upon the idea of a sponsored race, which would cover all the sailors setting off that year.
To circumvent the possibility of a non-entrant completing his voyage first and scooping the story, they made entry automatic: anyone sailing single-handed around the world that year would be considered in the race. This still left them with a dilemma in terms of the prize. A race for the fastest time around the world was a logical subject for a prize, but there would obviously be considerable interest in the first person to complete a non-stop circumnavigation, and there was no possibility of persuading the possible candidates to wait for a combined start. The Sunday Times therefore decided to award two prizes: the Golden Globe trophy for the first person to sail single-handed, non-stop around the world; and a £5,000 prize (equivalent to £92,000 in 2021) for the fastest time. [10] This automatic entry provision had the drawback that the race organisers could not vet entrants for their ability to take on this challenge safely. This was in contrast to the OSTAR, for example, which in the same year required entrants to complete a solo 500-nautical mile (930 km) qualifying passage. [11] The one concession to safety was the requirement that all competitors must start between 1 June and 31 October, in order to pass through the Southern Ocean in summer. [12] To make the speed record meaningful, competitors had to start from the British Isles. However Moitessier, the most likely person to make a successful circumnavigation, was preparing to leave from Toulon, in France. When The Sunday Times went to invite him to join the race, he was horrified, seeing the commercialisation of his voyage as a violation of the spiritual ideal which had inspired it. A few days later, Moitessier relented, thinking that he would join the race and that if he won, he would take the prizes and leave again without a word of thanks. In typical style, he refused the offer of a free radio to make progress reports, saying that this intrusion of the outside world would taint his voyage; he did, however, take a camera, agreeing to drop off packages of film if he got the chance. [13] The race was announced on 17 March 1968, by which time King, Ridgway, Howell (who later dropped out), Knox-Johnston, and Moitessier were registered as competitors. Chichester, despite expressing strong misgivings about the preparedness of some of the interested parties, was to chair the panel of judges. [10] Four days later, British electronics engineer Donald Crowhurst announced his intention to take part. Crowhurst was the manufacturer of a modestly successful radio navigation aid for sailors, who impressed many people with his apparent knowledge of sailing. With his electronics business failing, he saw a successful adventure, and the attendant publicity, as the solution to his financial troubles – essentially the mirror opposite of Moitessier, who saw publicity and financial rewards as inimical to his adventure. [14] Crowhurst planned to sail in a trimaran. These boats were starting to gain a reputation, still very much unproven, for speed, along with a darker reputation for unseaworthiness; they were known to be very stable under normal conditions, but extremely difficult to right if knocked over, for example by a rogue wave. Crowhurst planned to tackle the deficiencies of the trimaran with a revolutionary self-righting system, based on an automatically inflated air bag at the masthead. He would prove the system on his voyage, then go into business manufacturing it, thus making trimarans into safe boats for cruisers.
[15] By June, Crowhurst had secured some financial backing, essentially by mortgaging the boat, and later his family home. Crowhurst's boat, however, had not yet been built; despite the lateness of his entry, he pressed ahead with the idea of a custom boat, which started construction in late June. Crowhurst's belief was that a trimaran would give him a good chance of the prize for the fastest circumnavigation, and with the help of a wildly optimistic table of probable performances, he even predicted that he would be first to finish – despite a planned departure on 1 October. [16] The start (1 June to 28 July) Given the design of the race, there was no organised start; the competitors set off whenever they were ready, over a period of several months. On 1 June 1968, the first allowable day, John Ridgway sailed from Inishmore, Ireland, in his weekend cruiser. Just a week later, on 8 June, Chay Blyth followed suit – despite having absolutely no sailing experience. On the day he set sail, he had friends rig the boat for him and then sail in front of him in another boat to show him the correct manoeuvres. [17] Knox-Johnston got underway from Falmouth soon after, on 14 June. He was undisturbed by the fact that it was a Friday, contrary to the common sailors' superstition that it is bad luck to begin a voyage on a Friday. Suhaili, crammed with tinned food, was low in the water and sluggish, but the much more seaworthy boat soon started gaining on Ridgway and Blyth. [18] It soon became clear to Ridgway that his boat was not up to a serious voyage, and he was also becoming affected by loneliness. On 17 June, at Madeira, he made an arranged rendezvous with a friend to drop off his photos and logs, and received some mail in exchange. While reading a recent issue of one of the newspapers he had just received, he discovered that the rules against assistance prohibited receiving mail – including the newspaper in which he was reading this – and so he was technically disqualified. While he dismissed this as overly petty, he continued the voyage in bad spirits. The boat continued to deteriorate, and he finally decided that it would not be able to handle the heavy conditions of the Southern Ocean. On 21 July he put into Recife, Brazil, and retired from the race. [19] Even with the race underway, other competitors continued to declare their intention to join. On 30 June, Royal Navy officer Nigel Tetley announced that he would race in the trimaran he and his wife lived aboard. He obtained sponsorship from Music for Pleasure, a British budget record label, and started preparing his boat, Victress, in Plymouth, where Moitessier, King, and Frenchman Loïck Fougeron were also getting ready. Fougeron was a friend of Moitessier, who managed a motorcycle company in Casablanca, and planned to race on Captain Browne, a 30 foot (9.1 m) steel gaff cutter. Crowhurst, meanwhile, was far from ready – assembly of the three hulls of his trimaran only began on 28 July at a boatyard in Norfolk. [20] [21] [22] Attrition begins (29 July to 31 October) Cape Town and the Cape Peninsula, with the Cape of Good Hope on the bottom right Blyth and Knox-Johnston were well down the Atlantic by this time. Knox-Johnston, the experienced seaman, was enjoying himself, but Suhaili had problems with leaking seams near the keel. However, Knox-Johnston had managed a good repair by diving and caulking the seams underwater.
[23] Blyth was not far ahead, and although leading the race, he was having far greater problems with his boat, which was suffering in the hard conditions. He had also discovered that the fuel for his generator had been contaminated, which effectively put his radio out of action. On 15 August, Blyth went in to Tristan da Cunha to pass a message to his wife, and spoke to crew from an anchored cargo ship, Gillian Gaggins. On being invited aboard by her captain, a fellow Scot, Blyth found the offer impossible to refuse and went aboard, while the ship's engineers fixed his generator and replenished his fuel supply. By this time he had already shifted his focus from the race to a more personal quest to discover his own limits; and so, despite his technical disqualification for receiving assistance, he continued sailing towards Cape Town. His boat continued to deteriorate, however, and on 13 September he put into East London. Having successfully sailed the length of the Atlantic and rounded Cape Agulhas in an unsuitable boat, he decided that he would take on the challenge of the sea again, but in a better boat and on his own terms. [24] Despite the retirements, other racers were still getting started. On Thursday, 22 August, Moitessier and Fougeron set off, with King following on Saturday (none of them wanted to leave on a Friday). [25] With Joshua lightened for a race, Moitessier set a fast pace – more than twice as fast as Knox-Johnston over the same part of the course. Tetley sailed on 16 September, [26] and on 23 September, Crowhurst's boat, Teignmouth Electron, was finally launched in Norfolk. Under severe time pressure, Crowhurst planned to sail to Teignmouth, his planned departure point, in three days; but although the boat performed well downwind, the struggle against headwinds in the English Channel showed severe deficiencies in the boat's upwind performance, and the trip to Teignmouth took 13 days. [27] Meanwhile, Moitessier was making excellent progress. On 29 September he passed Trindade in the south Atlantic, and on 20 October he reached Cape Town, where he managed to leave word of his progress. He sailed on east into the Southern Ocean, where he continued to make good speed, covering 188 nautical miles (216 mi; 348 km) on 28 October. [28] Others were not so comfortable with the ocean conditions. On 30 October, Fougeron passed Tristan da Cunha, with King a few hundred nautical miles ahead. The next day – Halloween – they both found themselves in a severe storm. Fougeron hove-to, but still suffered a severe knockdown. King, who allowed his boat to tend to herself (a recognised procedure known as lying ahull), had a much worse experience; his boat was rolled and lost its foremast. Both men decided to retire from the race. [29] The last starters (31 October to 23 December) Four of the starters had decided to retire at this point, at which time Moitessier was 1,100 nautical miles (1,300 mi; 2,000 km) east of Cape Town, Knox-Johnston was 4,000 nautical miles (4,600 mi; 7,400 km) ahead in the middle of the Great Australian Bight, and Tetley was just nearing Trindade. [30] [31] [32] However, 31 October was also the last allowable day for racers to start, and was the day that the last two competitors, Donald Crowhurst and Alex Carozzo, got under way. Carozzo, a highly regarded Italian sailor, had competed in (but not finished) that year's OSTAR.
Considering himself unready for sea, he "sailed" on 31 October, to comply with the race's mandatory start date, but went straight to a mooring to continue preparing his boat without outside assistance. Crowhurst was also far from ready – his boat, barely finished, was a chaos of unstowed supplies, and his self-righting system was unbuilt. He left anyway, and started slowly making his way against the prevailing winds of the English Channel. [33] The approximate positions of the racers on 31 October 1968, the last day on which racers could start By mid-November Crowhurst was already having problems with his boat. Hastily built, the boat was already showing signs of being unprepared, and in the rush to depart, Crowhurst had left behind crucial repair materials. On 15 November, he made a careful appraisal of his outstanding problems and of the risks he would face in the Southern Ocean; he was also acutely aware of the financial problems awaiting him at home. Despite his analysis that Teignmouth Electron was not up to the severe conditions which she would face in the Roaring Forties, he pressed on. [34] Carozzo retired on 14 November, as he had started vomiting blood due to a peptic ulcer, and put into Porto, Portugal, for medical attention. [35] [36] Two more retirements were reported in rapid succession, as King made Cape Town on 22 November, and Fougeron stopped in Saint Helena on 27 November. [37] This left four boats in the race at the beginning of December: Knox-Johnston's Suhaili, battling frustrating and unexpected headwinds in the south Pacific Ocean, [38] Moitessier's Joshua, closing on Tasmania, [39] Tetley's Victress, just passing the Cape of Good Hope, [40] and Crowhurst's Teignmouth Electron, still in the north Atlantic. Tetley was just entering the Roaring Forties, and encountering strong winds. He experimented with self-steering systems based on various combinations of headsails, but had to deal with some frustrating headwinds. On 21 December he encountered a calm and took the opportunity to clean the hull somewhat; while doing so, he saw a 7 foot (2.1 m) shark prowling around the boat. He later caught it, using a shark hook baited with a tin of bully beef (corned beef), and hoisted it on board for a photo. His log is full of sail changes and other such sailing technicalities and gives little impression of how he was coping with the voyage emotionally; still, describing a heavy low on 15 December he hints at his feelings, wondering "why the hell I was on this voyage anyway". [41] Knox-Johnston was having problems, as Suhaili was showing the strains of the long and hard voyage. On 3 November, his self-steering gear had failed for the last time, as he had used up all his spares. He was also still having leak problems, and his rudder was loose. Still, he felt that the boat was fundamentally sound, so he braced the rudder as well as he could, and started learning to balance the boat in order to sail a constant course on her own. On 7 November, he dropped mail off in Melbourne, and on 19 November he made an arranged meeting off the southern coast of New Zealand with a newspaper journalist from Otago, New Zealand. [42] Crowhurst's false voyage (6 to 23 December) On 10 December, Crowhurst reported that he had had some fast sailing at last, including a day's run on 8 December of 243 nautical miles (280 mi; 450 km), a new 24-hour record.
Francis Chichester was sceptical of Crowhurst's sudden change in performance, and with good reason – on 6 December, Crowhurst had started creating a faked record of his voyage, showing his position advancing much faster than it actually was. The creation of this fake log was an incredibly intricate process, involving working celestial navigation in reverse. [43] The motivation for this initial deception was most likely to allow him to claim an attention-getting record prior to entering the doldrums. However, from that point on, he started to keep two logs – his actual navigation log, and a second log in which he could enter a faked description of a round-the-world voyage. This would have been an immensely difficult task, involving the need to make up convincing descriptions of weather and sailing conditions in a different part of the world, as well as complex reverse navigation. He tried to keep his options open as long as possible, mainly by giving only extremely vague position reports; but on 17 December he sent a deliberately false message indicating that he was over the Equator, which he was not. From this point his radio reports – while remaining ambiguous – indicated steadily more impressive progress around the world; but he never left the Atlantic, and it seems that after December the mounting problems with his boat had caused him to give up on ever doing so. [44] Christmas at sea (24 to 25 December) Christmas Day 1968 was a strange day for the four racers, who were very far from friends and family. Crowhurst made a radio call to his wife on Christmas Eve, during which he was pressed for a precise position, but refused to give one. Instead, he told her he was "off Cape Town", a position far in advance of his plotted fake position, and even farther from his actual position, 20 nautical miles (37 km) off the easternmost point in Brazil, just 7 degrees (480 nautical miles (550 mi; 890 km)) south of the equator. [45] Like Crowhurst, Tetley was depressed. He had a lavish Christmas dinner of roast pheasant, but was suffering badly from loneliness. [46] Knox-Johnston, thoroughly at home on the sea, treated himself to a generous dose of whisky and held a rousing solo carol service, then drank a toast to the Queen at 3pm. He managed to pick up some radio stations from the U.S., and heard for the first time about the Apollo 8 astronauts, who had just made the first orbit of the Moon. [47] Moitessier, meanwhile, was sunbathing in a flat calm, deep in the roaring forties south-west of New Zealand. [48] Rounding the Horn (26 December to 18 March) The approximate positions of the racers on 19 January 1969 By January, concern was growing for Knox-Johnston. He was having problems with his radio transmitter and nothing had been heard since he had passed south of New Zealand. [49] He was actually making good progress, rounding Cape Horn on 17 January 1969. Elated by this successful climax to his voyage, he briefly considered continuing east, to sail around the Southern Ocean a second time, but soon gave up the idea and turned north for home. [50] Crowhurst's deliberately vague position reporting was also causing consternation for the press, who were desperate for hard facts. On 19 January, he finally yielded to the pressure and stated himself to be 100 nautical miles (120 mi; 190 km) south-east of Gough Island in the south Atlantic. He also reported that due to generator problems he was shutting off his radio for some time. 
His position was misunderstood on the receiving end to be 100 nautical miles (190 km) south-east of the Cape of Good Hope; the high speed this erroneous position implied fuelled newspaper speculation in the following radio silence, and his position was optimistically reported as rapidly advancing around the globe. Crowhurst's actual position, meanwhile, was off Brazil, where he was making slow progress south, and carefully monitoring weather reports from around the world to include in his fake log. He was also becoming increasingly concerned about Teignmouth Electron, which was starting to come apart, mainly due to slapdash construction. [51] Moitessier also had not been heard from since New Zealand, but he was still making good progress and coping easily with the conditions of the "furious fifties". He was carrying letters from old Cape Horn sailors describing conditions in the Southern Ocean, and he frequently consulted these to get a feel for chances of encountering ice. He reached the Horn on 6 February, but when he started to contemplate the voyage back to Plymouth he realised that he was becoming increasingly disenchanted with the race concept. [52] Cape Horn from the South. As he sailed past the Falkland Islands [53] he was sighted, and this first news of him since Tasmania caused considerable excitement. It was predicted that he would arrive home on 24 April as the winner (in fact, Knox-Johnston finished on 22 April). A huge reception was planned in Britain, from where he would be escorted to France by a fleet of French warships for an even more grand reception. There was even said to be a Légion d'honneur waiting for him there. [54] Moitessier had a very good idea of this, but throughout his voyage he had been developing an increasing disgust with the excesses of the modern world; the planned celebrations seemed to him to be yet another example of brash materialism. After much debate with himself, and many thoughts of those waiting for him in England, he decided to continue sailing – past the Cape of Good Hope, and across the Indian Ocean for a second time, into the Pacific. [55] Unaware of this, the newspapers continued to publish "assumed" positions progressing steadily up the Atlantic, until, on 18 March, Moitessier fired a slingshot message in a can onto a ship near the shore of Cape Town, announcing his new plans to a stunned world: My intention is to continue the voyage, still nonstop, toward the Pacific Islands, where there is plenty of sun and more peace than in Europe. Please do not think I am trying to break a record. 'Record' is a very stupid word at sea. I am continuing nonstop because I am happy at sea, and perhaps because I want to save my soul. [56] On the same day, Tetley rounded Cape Horn, becoming the first to accomplish the feat in a multihull sailboat. Badly battered by his Southern Ocean voyage, he turned north with considerable relief. [54] [57] Teignmouth Electron was also battered, and Crowhurst badly wanted to make repairs, but without the spares that had been left behind he needed new supplies. After some planning, on 8 March he put into the tiny settlement of Río Salado, in Argentina, just south of the Río de la Plata. Although the village turned out to be the home of a small coastguard station, and his presence was logged, he got away with his supplies and without publicity. He started heading south again, intending to get some film and experience of Southern Ocean conditions to bolster his false log.
[58] The concern for Knox-Johnston turned to alarm in March, with no news of him since New Zealand; aircraft taking part in a NATO exercise in the North Atlantic mounted a search operation in the region of the Azores. However, on 6 April he finally managed to make contact with a British tanker using his signal lamp, which reported the news of his position, 1,200 nautical miles (1,400 mi; 2,200 km) from home. This created a sensation in Britain, with Knox-Johnston now clearly set to win the Golden Globe trophy, and Tetley predicted to win the £5,000 prize for the fastest time. [59] [60] The approximate positions of the racers on 10 April 1969 Crowhurst re-opened radio contact on 10 April, reporting himself to be "heading" towards the Diego Ramirez Islands, near Cape Horn. This news caused another sensation, as with his projected arrival in the UK at the start of July he now seemed to be a contender for the fastest time, and (very optimistically) even for a close finish with Tetley. Once his projected false position approached his actual position, he started heading north at speed. [61] Tetley, informed that he might be robbed of the fastest-time prize, started pushing harder, even though his boat was having significant problems – he made major repairs at sea in an attempt to stop the port hull of his trimaran falling off, and kept racing. On 22 April, he crossed his outbound track, one definition of a circumnavigation. [62] The finish (22 April to 1 July) On the same day, 22 April, Knox-Johnston completed his voyage where it had started, in Falmouth. This made him the winner of the Golden Globe trophy, and the first person to sail single-handed and non-stop around the world, which he had done in 312 days. [63] This left Tetley and Crowhurst apparently fighting for the £5,000 prize for fastest time. However, Tetley knew that he was pushing his boat too hard. On 20 May he ran into a storm near the Azores and began to worry about the boat's severely weakened state. Hoping that the storm would soon blow over, he lowered all sail and went to sleep with the boat lying a-hull. In the early hours of the next day he was awoken by the sounds of tearing wood. Fearing that the bow of the port hull might have broken off, he went on deck to cut it loose, only to discover that in breaking away it had made a large hole in the main hull, from which Victress was now taking on water too rapidly to stop. He sent a Mayday, and luckily got an almost immediate reply. He abandoned ship just before Victress finally sank, and was rescued from his liferaft that evening, having come to within 1,100 nautical miles (1,300 mi; 2,000 km) of finishing what would have been the most significant voyage ever made in a multi-hulled boat. [64] Crowhurst was left as the only person in the race, and – given his high reported speeds – virtually guaranteed the £5,000 prize. This would, however, also guarantee intense scrutiny of himself, his stories, and his logs by genuine Cape Horn veterans such as the sceptical Chichester. Although he had put great effort into his fabricated log, such a deception would in practice be extremely difficult to carry off, particularly for someone who did not have actual experience of the Southern Ocean – something of which he must have been aware at heart. Although he had been sailing fast – at one point making over 200 nautical miles (230 mi; 370 km) in a day – as soon as he learned of Tetley's sinking, he slowed down to a wandering crawl.
[65] Crowhurst's main radio failed at the beginning of June, shortly after he had learned that he was the sole remaining competitor. Plunged into unwilling solitude, he spent the following weeks attempting to repair the radio, and on 22 June was finally able to transmit and receive in morse code. The following days were spent exchanging cables with his agent and the press, during which he was bombarded with news of syndication rights, a welcoming fleet of boats and helicopters, and a rapturous welcome by the British people. It became clear that he could not now avoid the spotlight. Unable to see a way out of his predicament, he plunged into abstract philosophy, attempting to find an escape in metaphysics, and on 24 June he started writing a long essay to express his ideas. Inspired (in a misguided way) by the work of Einstein, whose book Relativity: The Special and General Theory he had aboard, the theme of Crowhurst's writing was that a sufficiently intelligent mind can overcome the constraints of the real world. Over the following eight days, he wrote 25,000 words of increasingly tortured prose, drifting farther and farther from reality, as Teignmouth Electron continued sailing slowly north, largely untended. Finally, on 1 July, he concluded his writing with a garbled suicide note and, it is assumed, jumped overboard. [66] Moitessier, meanwhile, had concluded his own personal voyage more happily. He had circumnavigated the world and sailed almost two-thirds of the way round a second time, all non-stop and mostly in the roaring forties. Despite heavy weather and a couple of severe knockdowns, he contemplated rounding the Horn again. However, he decided that he and Joshua had had enough and sailed to Tahiti, from where he and his wife had once set out for Alicante. He thus completed his second personal circumnavigation of the world (including the previous voyage with his wife) on 21 June 1969. He started work on his book. [67] Knox-Johnston, as the only finisher, was awarded both the Golden Globe trophy and the £5,000 prize for fastest time. He continued to sail and circumnavigated three more times. He was awarded a CBE in 1969 and was knighted in 1995. [68] Joshua, restored, at the Maritime Museum at La Rochelle It is impossible to say that Moitessier would have won if he had completed the race, as he would have been sailing in different weather conditions than Knox-Johnston did, but based on his time from the start to Cape Horn being about 77% of that of Knox-Johnston, it would have been extremely close. However, Moitessier is on record as stating that he would not have won. [69] His book, The Long Way, tells the story of his voyage as a spiritual journey as much as a sailing adventure and is still regarded as a classic of sailing literature. [70] Joshua was beached, along with many other yachts, by a storm at Cabo San Lucas in December 1982; with a new boat, Tamata, Moitessier sailed back to Tahiti from the San Francisco Bay. He died in 1994. [71] When Teignmouth Electron was discovered drifting and abandoned in the Atlantic on 10 July, a fund was started for Crowhurst's wife and children; Knox-Johnston donated his £5,000 prize to the fund, and more money was added by press and sponsors. [72] The news of his deception, mental breakdown, and suicide, as chronicled in his surviving logbooks, was made public a few weeks later, causing a sensation.
Nicholas Tomalin and Ron Hall, two of the journalists connected with the race, wrote a 1970 book on Crowhurst's voyage, The Strange Last Voyage of Donald Crowhurst, described by Hammond Innes in its Sunday Times review as "fascinating, uncomfortable reading" and a "meticulous investigation" of Crowhurst's downfall. [73] Tetley found it impossible to adapt to his old way of life after his adventure. He was awarded a consolation prize of £1,000, with which he decided to build a new trimaran for a round-the-world speed record attempt. His new 60 foot (18 m) trimaran was built in 1971, but his search for sponsorship to pay for fitting-out met with consistent rejection. His book, Trimaran Solo, sold poorly. Although he outwardly seemed to be coping, the repeated failures must have taken their toll. [74] In February 1972, he went missing from his home in Dover. His body was found in nearby woods hanging from a tree three days later. His death was originally believed to be a suicide. At the inquest, it was revealed that the body had been discovered wearing lingerie and the hands were bound. The attending pathologist suggested the likelihood of masochistic sexual activity. Finding no evidence to suggest that Tetley had killed himself, the coroner recorded an open verdict. Tetley was cremated; Knox-Johnston and Blyth were among the mourners in attendance. [69] Blyth devoted his life to the sea and to introducing others to its challenge. In 1970–1971 he sailed a sponsored boat, British Steel, single-handedly around the world "the wrong way", against the prevailing winds. He subsequently took part in the Whitbread Round the World Yacht Race and founded the Global Challenge race, which allows amateurs to race around the world. His old rowing partner, John Ridgway, followed a similar course; he started an adventure school in Scotland, and circumnavigated the world twice under sail: once in the Whitbread Round the World Yacht Race, and once with his wife. King finally completed a circumnavigation in Galway Blazer II in 1973. [75] Suhaili was sailed for some years more, including a trip to Greenland, and spent some years on display at the National Maritime Museum at Greenwich. However, her planking began to shrink because of the dry conditions and, unwilling to see her deteriorate, Knox-Johnston removed her from the museum and had her refitted in 2002. She was returned to the water and is now based at the National Maritime Museum Cornwall. Teignmouth Electron was sold to a tour operator in Jamaica and eventually ended up damaged and abandoned on Cayman Brac, where she lies to this day. [76] After being driven ashore during a storm at Cabo San Lucas, the restored Joshua was acquired by the maritime museum in La Rochelle, France, where she serves as part of a cruising school. [76] Given the failure of most starters and the tragic outcome of Crowhurst's voyage, considerable controversy was raised over the race and its organisation. No follow-up race was held for some time. However, in 1982 the BOC Challenge was organised; this single-handed round-the-world race with stops was inspired by the Golden Globe and has been held every four years since. In 1989, Philippe Jeantot founded the Vendée Globe race, a non-stop, single-handed, round-the-world race. Essentially the successor to the Golden Globe, this race is also held every four years and has attracted a public following for the sport. Nine competitors participated in the race.
Most of these had at least some prior sailing experience, although only Carozzo had competed in a major ocean race before the Golden Globe. The following table lists the entrants in order of starting, together with their prior sailing experience and their achievements in the race:

For the 50th anniversary of the first race, another Golden Globe Race was held in 2018. Entrants were limited to yachts and equipment similar to what had been available to Sir Robin in the original race. The race started from Les Sables-d'Olonne on 1 July 2018, with a prize purse of £75,000 and all sailors who finished before 15:25 on 22 April 2019 winning their entry fee back.[77]

For the 54th anniversary of the first race, a further Golden Globe Race was held in 2022. Entrants were again limited to yachts and equipment similar to those available to Sir Robin in the original race, but in two classes. The race started from Les Sables-d'Olonne on 4 September 2022 and was won by South African Kirsten Neuschäfer after an official time of 233 days, 20 hours, 43 minutes and 47 seconds at sea.[78]

References

4. Knox-Johnston 1969, pp. 1–12.
5. Knox-Johnston 1969, p. 17.
6. Nichols 2001, pp. 32–33.
7. Nichols 2001, pp. 12–28.
8. Tomalin & Hall 2003, pp. 24–25.
9. Nichols 2001, pp. 19–26.
10. Tomalin & Hall 2003, pp. 29–30.
11. Nichols 2001, p. 17.
12. Tomalin & Hall 2003, p. 30.
13. Moitessier 1995, p. 5.
14. Tomalin & Hall 2003, pp. 19–26.
15. Tomalin & Hall 2003, pp. 33–35, 39–40.
16. Tomalin & Hall 2003, pp. 35–38.
17. Nichols 2001, pp. 45–50.
18. Nichols 2001, pp. 55–56.
19. Nichols 2001, pp. 66, 85–87.
20. Tetley 1970, pp. 15–17.
21. Nichols 2001, pp. 56, 63–64.
22. Tomalin & Hall 2003, p. 39.
23. Knox-Johnston 1969, pp. 42–44.
24. Nichols 2001, pp. 92–101.
25. Moitessier 1995, p. 3.
26. Tetley 1970, pp. 23–24.
27. Tomalin & Hall 2003, pp. 50–56.
28. Moitessier 1995, pp. 19–29, 36–45, 56.
29. Nichols 2001, pp. 142, 149–151.
30. Moitessier 1995, p. 56.
31. Knox-Johnston 1969, pp. 93–94.
32. Tetley 1970, pp. 60–61.
33. Tomalin & Hall 2003, pp. 75–81.
34. Tomalin & Hall 2003, pp. 79–97.
35. Nichols 2001, p. 181.
37. Nichols 2001, pp. 155–156.
38. Knox-Johnston 1969, pp. 125–128.
39. Moitessier 1995, pp. 83–87.
40. Tetley 1970, p. 79.
41. Tetley 1970, pp. 79–91.
42. Knox-Johnston 1969, pp. 97, 101–102, 117–123.
43. Tomalin & Hall 2003, pp. 98–116.
44. Tomalin & Hall 2003, pp. 117–126.
45. Tomalin & Hall 2003, pp. 126, 133.
46. Tetley 1970, p. 93.
47. Knox-Johnston 1969, pp. 140–142.
48. Moitessier 1995, p. 93.
49. Nichols 2001, p. 213.
50. Knox-Johnston 1969, pp. 160–161, 175.
51. Tomalin & Hall 2003, pp. 143–147.
52. Moitessier 1995, pp. 109–111, 140–142.
53. Moitessier 1995, p. 146.
54. Nichols 2001, pp. 241–242.
55. Moitessier 1995, pp. 148, 158–165.
56. Nichols 2001, pp. 242–244.
57. Tetley, Trimaran Solo, pp. 124–131.
58. Tomalin & Hall 2003, pp. 151–162.
59. Knox-Johnston 1969, pp. 205–206.
60. Nichols 2001, pp. 248, 251.
61. Tomalin & Hall 2003, pp. 170–172, 185–186.
62. Tetley 1970, pp. 136–141.
63. Nichols 2001, p. 267.
64. Tetley 1970, pp. 149–160.
65. Tomalin & Hall 2003, pp. 186, 190–191.
66. Nichols 2001, pp. 195–251.
67. Moitessier 1995, pp. 172–175.
69. Eakin 2009.
70. Originally retrieved 6 March 2006.
71. Nichols 2001, pp. 293–294.
72. Nichols 2001, pp. 283–285.
74. Nichols 2001, pp. 275–282.
75. Nichols 2001, pp. 294–295.
76. Nichols 2001, pp. 295–296.
2
Purple Heart Stockpile: The WWII Medals Still Being Issued
During WWII, the United States military made over 1.5 million Purple Heart medals in anticipation of a colossal casualty rate from the planned invasion of Japan, Operation Downfall. The invasion never took place: Japan surrendered before the assault was launched. The US military nevertheless began awarding Purple Hearts as the war came to a close, but so many had been produced that the supply was far from exhausted.

"Time and combat will continue to erode the WWII stock, but it's anyone's guess how long it will be before the last Purple Heart for the invasion of Japan is pinned on a young soldier's chest," historian D.M. Giangreco is quoted as saying in an American newspaper.

By May 1945, Nazi Germany had surrendered, but the war with Japan was still raging. At that point, the United States concluded that Japan could be forced to surrender by atomic bombing, a massive amphibious invasion, or both. The heavy casualties of the Battle of Tarawa had given the Americans a hint of how tough Japanese defenses were, and the US military expected a far higher casualty rate in an invasion of the Japanese mainland.

By then, Harry S. Truman had succeeded the late Franklin D. Roosevelt as president, and America's final move against Japan was his call to make. As Truman weighed the relative advantages of an invasion and an atomic bombing, the US military's top brass went ahead with manufacturing and stockpiling medals for the soldiers they anticipated would be wounded during the invasion. In the end, Truman opted for the atomic bombs.

By the end of the war, over one million US servicemen who had been wounded or killed received Purple Hearts. After decorating all the WWII casualties, the United States still had almost 500,000 medals left. With the Korean and Vietnam Wars to follow, the remaining Purple Hearts were kept on standby, to be issued to wounded or killed servicemen. The United States did order more than 35,000 additional medals in the years following the Vietnam War, as it became involved in several further conflicts; 21,000 of these were ordered in 2008. These extra medals were not made because the Defense Logistics Agency had run out of Purple Hearts, but simply to replenish its inventory and avoid a shortfall.

WWII saw the first large-scale production of the Purple Heart, and with such a massive quantity produced in 1945, it is believed that medals from WWII are still being issued. Notably, in the late 1970s and 1980s, the growth of terrorism and the mounting casualty rate among US military personnel prompted the United States Defense Logistics Agency to revisit the Purple Heart stockpile. Some of these WWII-era medals were declared unusable, while many more were refurbished and repackaged. The WWII-era Purple Heart had a different ribbon from contemporary ones, so the refurbished medals were given new ribbons to bring them in line with modern requirements. The refurbished medals look identical to new ones, and a recipient may well be unable to tell from which era a medal came.
According to "Half a Million Purple Hearts", an article by Kathryn Moore and D.M. Giangreco, during the controversy over the planned 50th-anniversary display of the Enola Gay at the National Air and Space Museum, several people alleged that the US military's top brass had made up the extremely high casualty estimates of over a million servicemen to justify using the atomic bombs on an already beaten Japan in 1945, and that the colossal stock of Purple Hearts had been produced to back up those claims.

With so many medals made, and considering the statistics recorded over the years, it is quite possible that a few Purple Hearts from WWII will still be given to future recipients.

Dataset Card for "hackernews-stories"

More Information needed
